refactor: update description for cli options and config fields (#423)

pull/425/head
sigoden 2 months ago committed by GitHub
parent 5d763fc10c
commit 8ab57ed74e
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194

@ -3,12 +3,12 @@ name = "aichat"
version = "0.16.0"
edition = "2021"
authors = ["sigoden <sigoden@gmail.com>"]
description = "All-in-one chat and copilot CLI that integrates 10+ AI platforms"
description = "All-in-one AI-Powered CLI Chat & Copilot"
license = "MIT OR Apache-2.0"
homepage = "https://github.com/sigoden/aichat"
repository = "https://github.com/sigoden/aichat"
categories = ["command-line-utilities"]
keywords = ["chatgpt", "localai", "gpt", "repl"]
keywords = ["chatgpt", "llm", "cli", "gpt", "repl"]
[dependencies]
anyhow = "1.0.69"

@ -1,38 +1,32 @@
# AIChat
# Aichat: All-in-one AI-Powered CLI Chat & Copilot
[![CI](https://github.com/sigoden/aichat/actions/workflows/ci.yaml/badge.svg)](https://github.com/sigoden/aichat/actions/workflows/ci.yaml)
[![Crates](https://img.shields.io/crates/v/aichat.svg)](https://crates.io/crates/aichat)
[![Discord](https://img.shields.io/discord/1226737085453701222?label=Discord)](https://discord.gg/dSHTvH6S)
All-in-one chat and copilot CLI that integrates 10+ AI platforms.
Command Mode:
Aichat is an AI-powered CLI chat and copilot tool that seamlessly integrates with over 10 leading AI platforms, providing a powerful combination of chat-based interaction, context-aware conversations, and AI-assisted shell capabilities, all within a customizable and user-friendly environment.
![command mode](https://github.com/sigoden/aichat/assets/4012553/2ab27e1b-4078-4ea3-a98f-591b36491685)
Chat REPL mode:
![chat-repl mode](https://github.com/sigoden/aichat/assets/4012553/13427d54-efd5-4f4c-b17b-409edd30dfa3)
## Features
## Key Features
- Supports [chat-REPL](#chat-repl)
- Supports [roles](#roles)
- Supports sessions (context-aware conversation)
- Supports image analysis (vision)
- [Shell commands](#shell-commands): Execute commands using natural language
- [Shell integration](#shell-integration): AI-powered shell autocompletion
- [Custom theme](https://github.com/sigoden/aichat/wiki/Custom-Theme)
- Stream/non-stream output
* **Converse with Advanced AI:** Access and interact with 10+ leading AI platforms including OpenAI, Claude, Gemini, and more, all within one interface.
* **Streamline Your Workflow:** Generate code, execute shell commands using natural language, and automate tasks with AI assistance.
* **Unleash Your Creativity:** Utilize AI for writing, translation, image analysis, and exploring new ideas.
* **Customize Your Experience:** Configure settings, create custom roles for AI, and personalize your chat interface.
* **Empower Your Terminal:** Integrate AI into your shell for intelligent autocompletion and command suggestions.
* **Context & Session Management:** Maintain context within conversations and manage multiple sessions effortlessly.
## Integrated platforms
## Supported AI Platforms
- OpenAI: GPT3.5/GPT4 (paid, vision)
- Azure-OpenAI (paid)
- OpenAI GPT-3.5/GPT-4 (paid, vision)
- Azure OpenAI (paid)
- OpenAI-Compatible platforms
- Gemini: Gemini-1.0/Gemini-1.5 (free, vision)
- VertexAI (paid, vision)
- Claude: Claude3 (vision, paid)
- Claude: Claude-3 (vision, paid)
- Mistral (paid)
- Cohere (paid)
- Ollama (free, local)
@ -41,57 +35,52 @@ Chat REPL mode:
## Install
### Use a package management tool
### Package Managers
For Rust programmer
```sh
cargo install aichat
```
- **Rust Developers:** `cargo install aichat`
- **Homebrew/Linuxbrew Users:** `brew install aichat`
- **Pacman Users:** `yay -S aichat`
- **Windows Scoop Users:** `scoop install aichat`
- **Android Termux Users:** `pkg install aichat`
For macOS Homebrew or a Linuxbrew user
```sh
brew install aichat
```
### Pre-built Binaries
For Windows Scoop user
```sh
scoop install aichat
```
For Android Termux user
```sh
pkg install aichat
```
Download pre-built binaries for macOS, Linux, and Windows from [GitHub Releases](https://github.com/sigoden/aichat/releases), extract them, and add the `aichat` binary to your `$PATH`.
### Binaries for macOS, Linux, and Windows
## Configuration
Download it from [GitHub Releases](https://github.com/sigoden/aichat/releases), unzip, and add aichat to your `$PATH`.
## Config
On first launch, aichat will guide you through the configuration.
Upon first launch, Aichat will guide you through the configuration process. An example configuration file is provided below:
```
> No config file, create a new one? Yes
> AI Platform: openai
> API Key: <your_api_key_here>
✨ Saved config file to <config-dir>/aichat/config.yaml
```
Feel free to adjust the configuration according to your needs.
> Get the `config.yaml` path with the `aichat --info` command or the `.info` REPL command.
```yaml
model: openai:gpt-3.5-turbo # LLM model
temperature: 1.0 # LLM temperature
save: true # Whether to save the message
save_session: null # Whether to save the session, if null, asking
highlight: true # Set false to turn highlight
light_theme: false # Whether to use a light theme
wrap: no # Specify the text-wrapping mode (no, auto, <max-width>)
wrap_code: false # Whether wrap code block
auto_copy: false # Automatically copy the last output to the clipboard
keybindings: emacs # REPL keybindings. values: emacs, vi
prelude: '' # Set a default role or session (role:<name>, session:<name>)
compress_threshold: 1000 # Compress session if tokens exceed this value (valid when >=1000)
model: openai:gpt-3.5-turbo # The Large Language Model (LLM) to use
temperature: 1.0 # Controls the randomness and creativity of the LLM's responses
save: true # Indicates whether to persist the message
save_session: null # Controls session persistence; if null, the user is asked
highlight: true # Controls syntax highlighting
light_theme: false # Activates a light color theme when true
wrap: no # Controls text wrapping (no, auto, <max-width>)
wrap_code: false # Enables or disables wrapping of code blocks
auto_copy: false # Enables or disables automatic copying of the last LLM response to the clipboard
keybindings: emacs # Choose keybinding style (emacs, vi)
prelude: null # Set a default role or session to start with (role:<name>, session:<name>)
# Command that will be used to edit the current line buffer with ctrl+o
# if unset fallback to $EDITOR and $VISUAL
buffer_editor: null
# Compress session when token count reaches or exceeds this threshold (must be at least 1000)
compress_threshold: 1000
clients:
- type: openai
@ -101,15 +90,13 @@ clients:
name: localai
api_base: http://127.0.0.1:8080/v1
models:
- name: llama2
- name: llama3
max_input_tokens: 8192
```
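Other client types follow the same shape as the `localai` entry above. As an illustrative sketch only, a local Ollama client might be declared like this (the field values are assumptions; verify the exact schema against [config.example.yaml](config.example.yaml)):

```yaml
clients:
  - type: ollama
    api_base: http://localhost:11434   # default local Ollama endpoint (assumed)
    models:
      - name: llama3
        max_input_tokens: 8192
```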
Please review the [config.example.yaml](config.example.yaml) to see all available configuration options.
Refer to the [config.example.yaml](config.example.yaml) file for a complete list of configuration options. Environment variables can also be used for configuration; see the [Environment Variables](https://github.com/sigoden/aichat/wiki/Environment-Variables) page for details.
There are some configurations that can be set through environment variables, see [Environment Variables](https://github.com/sigoden/aichat/wiki/Environment-Variables).
## Command
## Command line
```
Usage: aichat [OPTIONS] [TEXT]...
@ -118,18 +105,19 @@ Arguments:
[TEXT]... Input text
Options:
-m, --model <MODEL> Choose a LLM model
-r, --role <ROLE> Choose a role
-s, --session [<SESSION>] Create or reuse a session
-e, --execute Execute commands using natural language
-c, --code Generate only code
-f, --file <FILE> Attach files to the message
-H, --no-highlight Disable syntax highlighting
-S, --no-stream No stream output
-w, --wrap <WRAP> Specify the text-wrapping mode (no, auto, <max-width>)
-m, --model <MODEL> Select an LLM model
-r, --role <ROLE> Select a role
-s, --session [<SESSION>] Start or join a session
--save-session Force the session to be saved
-e, --execute Execute commands in natural language
-c, --code Output code only
-f, --file <FILE> Include files with the message
-H, --no-highlight Turn off syntax highlighting
-S, --no-stream Turn off stream mode
-w, --wrap <WRAP> Control text wrapping (no, auto, <max-width>)
--light-theme Use light theme
--dry-run Run in dry run mode
--info Print related information
--dry-run Display the message without sending it
--info Display information
--list-models List all available models
--list-roles List all available roles
--list-sessions List all available sessions
@ -140,7 +128,7 @@ Options:
Here are some practical examples:
```sh
aichat # Start in REPL mode
aichat # Start REPL
aichat -e install nvim # Execute
aichat -c fibonacci in js # Code
@ -148,9 +136,9 @@ aichat -c fibonacci in js # Code
aichat -s # REPL + New session
aichat -s session1 # REPL + New/Reuse 'session1'
aichat --info # System info
aichat -r role1 --info # Role info
aichat -s session1 --info # Session info
aichat --info # View system info
aichat -r role1 --info # View role info
aichat -s session1 --info # View session info
cat data.toml | aichat -c to json > data.json # Pipe stdio/stdout
@ -167,32 +155,19 @@ Simply input what you want to do in natural language, and aichat will prompt and
aichat -e <text>...
```
Aichat is aware of OS and `$SHELL` you are using, it will provide shell command for specific system you have. For instance, if you ask `aichat` to update your system, it will return a command based on your OS. Here's an example using macOS:
Aichat is aware of the OS and shell you are using and will suggest a shell command tailored to your system. For instance, if you ask `aichat` to update your system, it will return a command based on your OS. Here's an example using macOS:
```sh
aichat -e update my system
```
```sh
$ aichat -e update my system
# sudo softwareupdate -i -a
# ? [e]xecute, [d]escribe, [a]bort: (e)
? [1]:execute [2]:explain [3]:revise [4]:cancel (1)
```
The same prompt, when used on Ubuntu, will generate a different suggestion:
```sh
aichat -e update my system
# sudo apt update && sudo apt upgrade -y
# ? [e]xecute, [d]escribe, [a]bort: (e)
```
We can still use pipes to pass input to aichat and generate shell commands:
```sh
aichat -e POST localhost with < data.json
# curl -X POST -H "Content-Type: application/json" -d '{"a": 1, "b": 2}' localhost
# ? [e]xecute, [d]escribe, [a]bort: (e)
```
We can also pipe the output of aichat which will disable interactive mode.
```sh
aichat -e find all json files in current folder | pbcopy
```

```sh
$ aichat -e update my system
sudo apt update && sudo apt upgrade -y
? [1]:execute [2]:explain [3]:revise [4]:cancel (1)
```
### Shell integration
@ -205,35 +180,9 @@ To install shell integration, go to [./scripts/shell-integration](https://github
### Generating code
By using the `--code` or `-c` parameter, you can specifically request pure code output, for instance:
```
aichat --code a echo server in node.js
```
By using the `--code` or `-c` parameter, you can specifically request pure code output.
```js
const net = require('net');
const server = net.createServer(socket => {
socket.on('data', data => {
socket.write(data);
});
socket.on('end', () => {
console.log('Client disconnected');
});
});
server.listen(3000, () => {
console.log('Server running on port 3000');
});
```
Since it is valid js code, we can redirect the output to a file:
```
aichat --code a echo server in node.js > echo-server.js
node echo-server.js
```
![aichat-code](https://github.com/sigoden/aichat/assets/4012553/2bbf7c8a-3822-4222-9498-693dcd683cf4)
**The `-c/--code` option ensures the extraction of code from Markdown.**
@ -241,40 +190,38 @@ node echo-server.js
Aichat has a powerful Chat REPL.
The REPL supports:
- Tab autocompletion
- [Custom REPL Prompt](https://github.com/sigoden/aichat/wiki/Custom-REPL-Prompt)
- Emacs/Vi keybinding
- Edit/paste multi-line text
- Open an editor to modify the current prompt
- History
- Undo support
**REPL Features:**
- **Convenient Tab Autocompletion:** Get suggestions for commands and functions while typing.
- **Customizable REPL Prompt:** Personalize the REPL interface by defining your own prompt.
- **Streamlined Keybindings:** Use familiar Emacs/Vi keybindings for efficient navigation and editing.
- **Multi-line Editing:** Create and edit multi-line inputs with ease.
- **External Editor Integration:** Open an external editor to refine the current inputs or write longer inputs.
- **History and Undo Support:** Access previously executed commands and undo recent edits.
### `.help` - print help message
```
> .help
.help Print this help message
.info Print system info
.model Switch LLM model
.role Use a role
.info role Show the role info
.exit role Leave current role
.session Start a context-aware chat session
.info session Show the session info
.save session Save the session to the file
.clear messages Clear messages in the session
.help Show this help message
.info View system info
.model Change the current LLM
.prompt Make a temporary role using a prompt
.role Switch to a specific role
.info role View role info
.exit role Leave the role
.session Begin a chat session
.info session View session info
.save session Save the chat to file
.clear messages Erase messages in the current session
.exit session End the current session
.file Attach files to the message and then submit it
.set Modify the configuration parameters
.copy Copy the last reply to the clipboard
.file Read files and send them as input
.set Adjust settings
.copy Copy the last response
.exit Exit the REPL
Type ::: to begin multi-line editing, type ::: to end it.
Press Ctrl+O to open an editor to modify the current prompt.
Press Ctrl+C to abort readline, Ctrl+D to exit the REPL
Type ::: to start multi-line editing, type ::: to finish it.
Press Ctrl+O to open an editor to edit the input line.
Press Ctrl+C to cancel the response, Ctrl+D to exit the REPL
```
### `.info` - view information
@ -285,19 +232,19 @@ model openai:gpt-3.5-turbo
temperature -
dry_run false
save true
save_session true
save_session -
highlight true
light_theme false
wrap no
wrap_code false
auto_copy false
auto_copy true
keybindings emacs
prelude -
compress_threshold 1000
config_file /home/alice/.config/aichat/config.yaml
roles_file /home/alice/.config/aichat/roles.yaml
messages_file /home/alice/.config/aichat/messages.md
sessions_dir /home/alice/.config/aichat/sessions
compress_threshold 2000
config_file /home/sigo/.config/aichat/config.yaml
roles_file /home/sigo/.config/aichat/roles.yaml
messages_file /home/sigo/.config/aichat/messages.md
sessions_dir /home/sigo/.config/aichat/sessions
```
### `.model` - choose a model
@ -307,7 +254,7 @@ sessions_dir /home/alice/.config/aichat/sessions
> .model ollama:llama2
```
> You can easily enter model name using autocomplete.
> You can easily enter the model name using tab autocompletion.
### `.role` - let the AI play a role
@ -377,7 +324,18 @@ The prompt on the right side is about the current usage of tokens and the propor
compared to the maximum number of tokens allowed by the model.
### `.file` - attach files to the message
### `.prompt` - make a temporary role using a prompt
There are situations where setting a system message is necessary, but modifying the `roles.yaml` file is undesirable.
To address this, use the `.prompt` command to create a temporary role for the current conversation.
```
> .prompt write unit tests for the rust functions
%%>
```
### `.file` - include files with the message
```
Usage: .file <file>... [-- text...]
@ -406,7 +364,7 @@ Usage: .file <file>... [-- text...]
We can define a batch of roles in `roles.yaml`.
> Retrieve the location of `roles.yaml` through the REPL `.info` command or CLI `--info` option.
> Get the `roles.yaml` path with the `aichat --info` command or the `.info` REPL command.
For example, we can define a role:
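A minimal entry could look like the following sketch (the role name and prompt here are invented for illustration; check the repository's example files for the authoritative schema):

```yaml
- name: emoji-translator
  prompt: >
    I want you to translate the sentences I write into emojis.
    Reply with emojis only.
  temperature: 0.7
```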

@ -1,24 +1,24 @@
model: openai:gpt-3.5-turbo # LLM model
temperature: 1.0 # LLM temperature
save: true # Whether to save the message
save_session: null # Whether to save the session, if null, asking
highlight: true # Set false to turn highlight
light_theme: false # Whether to use a light theme
wrap: no # Specify the text-wrapping mode (no, auto, <max-width>)
wrap_code: false # Whether wrap code block
auto_copy: false # Automatically copy the last output to the clipboard
keybindings: emacs # REPL keybindings. (emacs, vi)
prelude: null # Set a default role or session (role:<name>, session:<name>)
model: openai:gpt-3.5-turbo # The Large Language Model (LLM) to use
temperature: 1.0 # Controls the randomness and creativity of the LLM's responses
save: true # Indicates whether to persist the message
save_session: null # Controls session persistence; if null, the user is asked
highlight: true # Controls syntax highlighting
light_theme: false # Activates a light color theme when true
wrap: no # Controls text wrapping (no, auto, <max-width>)
wrap_code: false # Enables or disables wrapping of code blocks
auto_copy: false # Enables or disables automatic copying of the last LLM response to the clipboard
keybindings: emacs # Choose keybinding style (emacs, vi)
prelude: null # Set a default role or session to start with (role:<name>, session:<name>)
# Command that will be used to edit the current line buffer with ctrl+o
# if unset fallback to $EDITOR and $VISUAL
buffer_editor: null
# Compress session if tokens exceed this value (valid when >=1000)
# Compress session when token count reaches or exceeds this threshold (must be at least 1000)
compress_threshold: 1000
# The prompt for summarizing session messages
# Text prompt used for creating a concise summary of session messages
summarize_prompt: 'Summarize the discussion briefly in 200 words or less to use as a prompt for future context.'
# The prompt for the summary of the session
# Text prompt used to introduce the summary of the entire session
summary_prompt: 'This is a summary of the chat history as a recap: '
# Custom REPL prompt, see https://github.com/sigoden/aichat/wiki/Custom-REPL-Prompt
@ -42,7 +42,8 @@ clients:
# See https://ai.google.dev/docs
- type: gemini
api_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
block_threshold: BLOCK_NONE # Optional field, choices: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
# Optional field, possible values: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
block_threshold: BLOCK_NONE
# See https://docs.anthropic.com/claude/reference/getting-started-with-the-api
- type: claude
@ -91,11 +92,12 @@ clients:
# See https://cloud.google.com/vertex-ai
- type: vertexai
api_base: https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/publishers/google/models
# Setup Application Default Credentials (ADC) file, Optional field
# Run `gcloud auth application-default login` to setup adc
# Specifies an application-default-credentials (ADC) file; optional field
# Run `gcloud auth application-default login` to initialize the ADC file
# see https://cloud.google.com/docs/authentication/external/set-up-adc
adc_file: <path-to/gcloud/application_default_credentials.json>
block_threshold: BLOCK_ONLY_HIGH # Optional field, choices: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
# Optional field, possible values: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
block_threshold: BLOCK_ONLY_HIGH
# See https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html
- type: ernie

@ -1,16 +1,16 @@
complete -c aichat -s m -l model -x -a "(aichat --list-models)" -d 'Choose a LLM model' -r
complete -c aichat -s r -l role -x -a "(aichat --list-roles)" -d 'Choose a role' -r
complete -c aichat -s s -l session -x -a"(aichat --list-sessions)" -d 'Create or reuse a session' -r
complete -c aichat -s f -l file -d 'Attach files to the message' -r -F
complete -c aichat -s w -l wrap -d 'Specify the text-wrapping mode (no, auto, <max-width>)'
complete -c aichat -l save-session -d 'Whether to save the session'
complete -c aichat -s e -l execute -d 'Execute commands using natural language'
complete -c aichat -s c -l code -d 'Generate only code'
complete -c aichat -s H -l no-highlight -d 'Disable syntax highlighting'
complete -c aichat -s S -l no-stream -d 'No stream output'
complete -c aichat -s m -l model -x -a "(aichat --list-models)" -d 'Select an LLM model' -r
complete -c aichat -s r -l role -x -a "(aichat --list-roles)" -d 'Select a role' -r
complete -c aichat -s s -l session -x -a"(aichat --list-sessions)" -d 'Start or join a session' -r
complete -c aichat -s f -l file -d 'Include files with the message' -r -F
complete -c aichat -s w -l wrap -d 'Control text wrapping (no, auto, <max-width>)'
complete -c aichat -l save-session -d 'Force the session to be saved'
complete -c aichat -s e -l execute -d 'Execute commands in natural language'
complete -c aichat -s c -l code -d 'Output code only'
complete -c aichat -s H -l no-highlight -d 'Turn off syntax highlighting'
complete -c aichat -s S -l no-stream -d 'Turn off stream mode'
complete -c aichat -l light-theme -d 'Use light theme'
complete -c aichat -l dry-run -d 'Run in dry run mode'
complete -c aichat -l info -d 'Print related information'
complete -c aichat -l dry-run -d 'Display the message without sending it'
complete -c aichat -l info -d 'Display information'
complete -c aichat -l list-models -d 'List all available models'
complete -c aichat -l list-roles -d 'List all available roles'
complete -c aichat -l list-sessions -d 'List all available sessions'

@ -24,19 +24,19 @@ module completions {
# All-in-one chat and copilot CLI that integrates 10+ AI platforms
export extern aichat [
--model(-m): string@"nu-complete aichat model" # Choose a LLM model
--role(-r): string@"nu-complete aichat role" # Choose a role
--session(-s): string@"nu-complete aichat role" # Create or reuse a session
--save-session # Whether to save the session
--execute(-e) # Execute commands using natural language
--code(-c) # Generate only code
--file(-f): string # Attach files to the message
--no-highlight(-H) # Disable syntax highlighting
--no-stream(-S) # No stream output
--wrap(-w): string # Specify the text-wrapping mode (no, auto, <max-width>)
--model(-m): string@"nu-complete aichat model" # Select an LLM model
--role(-r): string@"nu-complete aichat role" # Select a role
--session(-s): string@"nu-complete aichat role" # Start or join a session
--save-session # Force the session to be saved
--execute(-e) # Execute commands in natural language
--code(-c) # Output code only
--file(-f): string # Include files with the message
--no-highlight(-H) # Turn off syntax highlighting
--no-stream(-S) # Turn off stream mode
--wrap(-w): string # Control text wrapping (no, auto, <max-width>)
--light-theme # Use light theme
--dry-run # Run in dry run mode
--info # Print related information
--dry-run # Display the message without sending it
--info # Display information
--list-models # List all available models
--list-roles # List all available roles
--list-sessions # List all available sessions

@ -20,28 +20,28 @@ Register-ArgumentCompleter -Native -CommandName 'aichat' -ScriptBlock {
$completions = @(switch ($command) {
'aichat' {
[CompletionResult]::new('-m', '-m', [CompletionResultType]::ParameterName, 'Choose a LLM model')
[CompletionResult]::new('--model', '--model', [CompletionResultType]::ParameterName, 'Choose a LLM model')
[CompletionResult]::new('-r', '-r', [CompletionResultType]::ParameterName, 'Choose a role')
[CompletionResult]::new('--role', '--role', [CompletionResultType]::ParameterName, 'Choose a role')
[CompletionResult]::new('-s', '-s', [CompletionResultType]::ParameterName, 'Create or reuse a session')
[CompletionResult]::new('--session', '--session', [CompletionResultType]::ParameterName, 'Create or reuse a session')
[CompletionResult]::new('-f', '-f', [CompletionResultType]::ParameterName, 'Attach files to the message')
[CompletionResult]::new('--file', '--file', [CompletionResultType]::ParameterName, 'Attach files to the message')
[CompletionResult]::new('-w', '-w', [CompletionResultType]::ParameterName, 'Specify the text-wrapping mode (no, auto, <max-width>)')
[CompletionResult]::new('--wrap', '--wrap', [CompletionResultType]::ParameterName, 'Specify the text-wrapping mode (no, auto, <max-width>)')
[CompletionResult]::new('--save-session', '--save-session', [CompletionResultType]::ParameterName, 'Whether to save the session')
[CompletionResult]::new('-e', '-e', [CompletionResultType]::ParameterName, 'Execute commands using natural language')
[CompletionResult]::new('--execute', '--execute', [CompletionResultType]::ParameterName, 'Execute commands using natural language')
[CompletionResult]::new('-c', '-c', [CompletionResultType]::ParameterName, 'Generate only code')
[CompletionResult]::new('--code', '--code', [CompletionResultType]::ParameterName, 'Generate only code')
[CompletionResult]::new('-H', '-H', [CompletionResultType]::ParameterName, 'Disable syntax highlighting')
[CompletionResult]::new('--no-highlight', '--no-highlight', [CompletionResultType]::ParameterName, 'Disable syntax highlighting')
[CompletionResult]::new('-S', '-S', [CompletionResultType]::ParameterName, 'No stream output')
[CompletionResult]::new('--no-stream', '--no-stream', [CompletionResultType]::ParameterName, 'No stream output')
[CompletionResult]::new('-m', '-m', [CompletionResultType]::ParameterName, 'Select an LLM model')
[CompletionResult]::new('--model', '--model', [CompletionResultType]::ParameterName, 'Select an LLM model')
[CompletionResult]::new('-r', '-r', [CompletionResultType]::ParameterName, 'Select a role')
[CompletionResult]::new('--role', '--role', [CompletionResultType]::ParameterName, 'Select a role')
[CompletionResult]::new('-s', '-s', [CompletionResultType]::ParameterName, 'Start or join a session')
[CompletionResult]::new('--session', '--session', [CompletionResultType]::ParameterName, 'Start or join a session')
[CompletionResult]::new('-f', '-f', [CompletionResultType]::ParameterName, 'Include files with the message')
[CompletionResult]::new('--file', '--file', [CompletionResultType]::ParameterName, 'Include files with the message')
[CompletionResult]::new('-w', '-w', [CompletionResultType]::ParameterName, 'Control text wrapping (no, auto, <max-width>)')
[CompletionResult]::new('--wrap', '--wrap', [CompletionResultType]::ParameterName, 'Control text wrapping (no, auto, <max-width>)')
[CompletionResult]::new('--save-session', '--save-session', [CompletionResultType]::ParameterName, 'Force the session to be saved')
[CompletionResult]::new('-e', '-e', [CompletionResultType]::ParameterName, 'Execute commands in natural language')
[CompletionResult]::new('--execute', '--execute', [CompletionResultType]::ParameterName, 'Execute commands in natural language')
[CompletionResult]::new('-c', '-c', [CompletionResultType]::ParameterName, 'Output code only')
[CompletionResult]::new('--code', '--code', [CompletionResultType]::ParameterName, 'Output code only')
[CompletionResult]::new('-H', '-H', [CompletionResultType]::ParameterName, 'Turn off syntax highlighting')
[CompletionResult]::new('--no-highlight', '--no-highlight', [CompletionResultType]::ParameterName, 'Turn off syntax highlighting')
[CompletionResult]::new('-S', '-S', [CompletionResultType]::ParameterName, 'Turn off stream mode')
[CompletionResult]::new('--no-stream', '--no-stream', [CompletionResultType]::ParameterName, 'Turn off stream mode')
[CompletionResult]::new('--light-theme', '--light-theme', [CompletionResultType]::ParameterName, 'Use light theme')
[CompletionResult]::new('--dry-run', '--dry-run', [CompletionResultType]::ParameterName, 'Run in dry run mode')
[CompletionResult]::new('--info', '--info', [CompletionResultType]::ParameterName, 'Print related information')
[CompletionResult]::new('--dry-run', '--dry-run', [CompletionResultType]::ParameterName, 'Display the message without sending it')
[CompletionResult]::new('--info', '--info', [CompletionResultType]::ParameterName, 'Display information')
[CompletionResult]::new('--list-models', '--list-models', [CompletionResultType]::ParameterName, 'List all available models')
[CompletionResult]::new('--list-roles', '--list-roles', [CompletionResultType]::ParameterName, 'List all available roles')
[CompletionResult]::new('--list-sessions', '--list-sessions', [CompletionResultType]::ParameterName, 'List all available sessions')

@ -15,28 +15,28 @@ _aichat() {
local context curcontext="$curcontext" state line
local common=(
'-m+[Choose a LLM model]:MODEL:->models' \
'--model=[Choose a LLM model]:MODEL:->models' \
'-r+[Choose a role]:ROLE:->roles' \
'--role=[Choose a role]:ROLE:->roles' \
'-s+[Create or reuse a session]:SESSION:->sessions' \
'--session=[Create or reuse a session]:SESSION:->sessions' \
'*-f+[Attach files to the message]:FILE:_files' \
'*--file=[Attach files to the message]:FILE:_files' \
'-w+[Specify the text-wrapping mode (no, auto, <max-width>)]:WRAP: ' \
'--wrap=[Specify the text-wrapping mode (no, auto, <max-width>)]:WRAP: ' \
'--save-session[Whether to save the session]' \
'-e[Execute commands using natural language]' \
'--execute[Execute commands using natural language]' \
'-c[Generate only code]' \
'--code[Generate only code]' \
'-H[Disable syntax highlighting]' \
'--no-highlight[Disable syntax highlighting]' \
'-S[No stream output]' \
'--no-stream[No stream output]' \
'-m+[Select an LLM model]:MODEL:->models' \
'--model=[Select an LLM model]:MODEL:->models' \
'-r+[Select a role]:ROLE:->roles' \
'--role=[Select a role]:ROLE:->roles' \
'-s+[Start or join a session]:SESSION:->sessions' \
'--session=[Start or join a session]:SESSION:->sessions' \
'*-f+[Include files with the message]:FILE:_files' \
'*--file=[Include files with the message]:FILE:_files' \
'-w+[Control text wrapping (no, auto, <max-width>)]:WRAP: ' \
'--wrap=[Control text wrapping (no, auto, <max-width>)]:WRAP: ' \
'--save-session[Force the session to be saved]' \
'-e[Execute commands in natural language]' \
'--execute[Execute commands in natural language]' \
'-c[Output code only]' \
'--code[Output code only]' \
'-H[Turn off syntax highlighting]' \
'--no-highlight[Turn off syntax highlighting]' \
'-S[Turn off stream mode]' \
'--no-stream[Turn off stream mode]' \
'--light-theme[Use light theme]' \
'--dry-run[Run in dry run mode]' \
'--info[Print related information]' \
'--dry-run[Display the message without sending it]' \
'--info[Display information]' \
'--list-models[List all available models]' \
'--list-roles[List all available roles]' \
'--list-sessions[List all available sessions]' \

@ -3,43 +3,43 @@ use clap::Parser;
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
pub struct Cli {
/// Choose a LLM model
/// Select an LLM model
#[clap(short, long)]
pub model: Option<String>,
/// Choose a role
/// Select a role
#[clap(short, long)]
pub role: Option<String>,
/// Create or reuse a session
/// Start or join a session
#[clap(short = 's', long)]
pub session: Option<Option<String>>,
/// Whether to save the session
/// Force the session to be saved
#[clap(long)]
pub save_session: bool,
/// Execute commands using natural language
/// Execute commands in natural language
#[clap(short = 'e', long)]
pub execute: bool,
/// Generate only code
/// Output code only
#[clap(short = 'c', long)]
pub code: bool,
/// Attach files to the message
/// Include files with the message
#[clap(short = 'f', long, value_name = "FILE")]
pub file: Vec<String>,
/// Disable syntax highlighting
/// Turn off syntax highlighting
#[clap(short = 'H', long)]
pub no_highlight: bool,
/// No stream output
/// Turn off stream mode
#[clap(short = 'S', long)]
pub no_stream: bool,
/// Specify the text-wrapping mode (no, auto, <max-width>)
/// Control text wrapping (no, auto, <max-width>)
#[clap(short = 'w', long)]
pub wrap: Option<String>,
/// Use light theme
#[clap(long)]
pub light_theme: bool,
/// Run in dry run mode
/// Display the message without sending it
#[clap(long)]
pub dry_run: bool,
/// Print related information
/// Display information
#[clap(long)]
pub info: bool,
/// List all available models

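The `session: Option<Option<String>>` field above is clap's idiom for a flag whose value is itself optional. As a stdlib-only sketch (hypothetical, for illustration only — clap derives this parsing automatically), the three states it encodes are:

```rust
// Sketch of how `-s/--session` maps onto Option<Option<String>>:
// - flag absent          -> None             (no session)
// - `-s` with no value   -> Some(None)       (start an unnamed session)
// - `-s work`            -> Some(Some(name)) (start or join the named session)
fn parse_session(args: &[&str]) -> Option<Option<String>> {
    let mut i = 0;
    while i < args.len() {
        if args[i] == "-s" || args[i] == "--session" {
            // A following token that is not another flag is the session name.
            return match args.get(i + 1) {
                Some(next) if !next.starts_with('-') => Some(Some(next.to_string())),
                _ => Some(None),
            };
        }
        i += 1;
    }
    None
}

fn main() {
    assert_eq!(parse_session(&[]), None);
    assert_eq!(parse_session(&["-s"]), Some(None));
    assert_eq!(parse_session(&["-s", "work"]), Some(Some("work".to_string())));
    println!("ok");
}
```

So `aichat -s` starts a temporary session, while `aichat -s work` starts or joins the session named `work`.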
@@ -44,52 +44,30 @@ const CLIENTS_FIELD: &str = "clients";
#[derive(Debug, Clone, Deserialize)]
#[serde(default)]
pub struct Config {
/// LLM model
#[serde(rename(serialize = "model", deserialize = "model"))]
pub model_id: Option<String>,
/// LLM temperature
pub temperature: Option<f64>,
/// Dry-run flag
pub dry_run: bool,
/// Whether to save the message
pub save: bool,
/// Whether to save the session
pub save_session: Option<bool>,
/// Whether to disable highlight
pub highlight: bool,
/// Whether to use a light theme
pub light_theme: bool,
/// Specify the text-wrapping mode (no, auto, <max-width>)
pub wrap: Option<String>,
/// Whether wrap code block
pub wrap_code: bool,
/// Automatically copy the last output to the clipboard
pub auto_copy: bool,
/// REPL keybindings. (emacs, vi)
pub keybindings: Keybindings,
/// Set a default role or session (role:<name>, session:<name>)
pub prelude: Option<String>,
/// Command that will be used to edit the current line buffer
pub buffer_editor: Option<String>,
/// Compress session if tokens exceed this value (>=1000)
pub compress_threshold: usize,
/// The prompt for summarizing session messages
pub summarize_prompt: String,
/// The prompt for the summary of the session
pub summary_prompt: String,
/// REPL left prompt
pub left_prompt: String,
/// REPL right prompt
pub right_prompt: String,
/// Setup clients
pub clients: Vec<ClientConfig>,
/// Predefined roles
#[serde(skip)]
pub roles: Vec<Role>,
/// Current selected role
#[serde(skip)]
pub role: Option<Role>,
/// Current session
#[serde(skip)]
pub session: Option<Session>,
#[serde(skip)]

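The fields above deserialize from the user's config file. A hypothetical YAML snippet (key names taken from the struct's serde names; values are purely illustrative, not defaults):

```yaml
# Illustrative config — keys mirror the Config struct above
model: openai:gpt-4o       # serialized name of model_id
temperature: 0.7
save: true                 # save messages
highlight: true            # syntax highlighting
light_theme: false
wrap: auto                 # no, auto, or <max-width>
wrap_code: false
auto_copy: false           # copy the last output to the clipboard
keybindings: emacs         # emacs or vi
prelude: role:coder        # or session:<name>
compress_threshold: 2000   # compress session above this many tokens (>=1000)
clients:
  - type: openai
    api_key: sk-xxx        # placeholder
```

Fields marked `#[serde(skip)]` (roles, role, session) are runtime state and never appear in the file.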
@@ -169,7 +169,7 @@ fn execute(config: &GlobalConfig, mut input: Input) -> Result<()> {
println!("{}", markdown_render.render(&eval_str).trim());
let mut explain = false;
loop {
let answer = Text::new("[1]:execute [2]:explain [3]:revise [4]:exit")
let answer = Text::new("[1]:execute [2]:explain [3]:revise [4]:cancel")
.with_default("1")
.with_validator(|input: &str| match matches!(input, "1" | "2" | "3" | "4") {
true => Ok(Validation::Valid),

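The prompt above accepts only the digits 1–4 and re-asks otherwise. A minimal sketch of the choice mapping (hypothetical names, not aichat's actual types):

```rust
// Maps the "[1]:execute [2]:explain [3]:revise [4]:cancel" answer to an action.
#[derive(Debug, PartialEq)]
enum Action {
    Execute,
    Explain,
    Revise,
    Cancel,
}

fn parse_choice(input: &str) -> Option<Action> {
    match input.trim() {
        "1" => Some(Action::Execute),
        "2" => Some(Action::Explain),
        "3" => Some(Action::Revise),
        "4" => Some(Action::Cancel),
        _ => None, // the real prompt re-asks via its validator
    }
}

fn main() {
    assert_eq!(parse_choice("1"), Some(Action::Execute));
    assert_eq!(parse_choice("4"), Some(Action::Cancel));
    assert_eq!(parse_choice("x"), None);
    println!("ok");
}
```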
@@ -66,7 +66,7 @@ fn markdown_stream_inner(
let columns = terminal::size()?.0;
let mut spinner = Spinner::new(" Thinking");
let mut spinner = Spinner::new(" Generating");
'outer: loop {
if abort.aborted() {

@@ -26,35 +26,31 @@ const MENU_NAME: &str = "completion_menu";
lazy_static! {
static ref REPL_COMMANDS: [ReplCommand; 16] = [
ReplCommand::new(".help", "Print this help message", State::all()),
ReplCommand::new(".info", "Print system info", State::all()),
ReplCommand::new(".model", "Switch LLM model", State::all()),
ReplCommand::new(".help", "Show this help message", State::all()),
ReplCommand::new(".info", "View system info", State::all()),
ReplCommand::new(".model", "Change the current LLM", State::all()),
ReplCommand::new(
".prompt",
"Use a temp role with this prompt",
"Make a temporary role using a prompt",
State::able_change_role()
),
ReplCommand::new(".role", "Use a role", State::able_change_role()),
ReplCommand::new(".info role", "Show the role info", State::in_role(),),
ReplCommand::new(".exit role", "Leave current role", State::in_role(),),
ReplCommand::new(
".session",
"Start a context-aware chat session",
State::not_in_session(),
),
ReplCommand::new(
".info session",
"Show the session info",
State::in_session(),
".role",
"Switch to a specific role",
State::able_change_role()
),
ReplCommand::new(".info role", "View role info", State::in_role(),),
ReplCommand::new(".exit role", "Leave the role", State::in_role(),),
ReplCommand::new(".session", "Begin a chat session", State::not_in_session(),),
ReplCommand::new(".info session", "View session info", State::in_session(),),
ReplCommand::new(
".save session",
"Save the session to the file",
"Save the chat to file",
State::in_session(),
),
ReplCommand::new(
".clear messages",
"Clear messages in the session",
"Erase messages in the current session",
State::unable_change_role()
),
ReplCommand::new(
@@ -62,17 +58,9 @@ lazy_static! {
"End the current session",
State::in_session(),
),
ReplCommand::new(
".file",
"Attach files to the message and then submit it",
State::all()
),
ReplCommand::new(".set", "Modify the configuration parameters", State::all()),
ReplCommand::new(
".copy",
"Copy the last reply to the clipboard",
State::all()
),
ReplCommand::new(".file", "Include files with the message", State::all()),
ReplCommand::new(".set", "Adjust settings", State::all()),
ReplCommand::new(".copy", "Copy the last response", State::all()),
ReplCommand::new(".exit", "Exit the REPL", State::all()),
];
static ref COMMAND_RE: Regex = Regex::new(r"^\s*(\.\S*)\s*").unwrap();
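`COMMAND_RE` (`^\s*(\.\S*)\s*`) pulls the leading `.command` token off a REPL line. A stdlib-only sketch of the same split (an assumption for illustration; the real code uses the `regex` crate):

```rust
// Splits a REPL line into its `.command` token and trailing argument.
// Lines that don't start with '.' are ordinary chat input.
fn split_command(line: &str) -> Option<(&str, &str)> {
    let trimmed = line.trim_start();
    if !trimmed.starts_with('.') {
        return None;
    }
    match trimmed.split_once(char::is_whitespace) {
        Some((cmd, rest)) => Some((cmd, rest.trim())),
        None => Some((trimmed, "")),
    }
}

fn main() {
    assert_eq!(split_command(".model gpt-4o"), Some((".model", "gpt-4o")));
    assert_eq!(split_command("  .help"), Some((".help", "")));
    assert_eq!(split_command("hello"), None);
    println!("ok");
}
```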
@@ -420,9 +408,9 @@ fn dump_repl_help() {
println!(
r###"{head}
Type ::: to begin multi-line editing, type ::: to end it.
Type ::: to start multi-line editing, type ::: to finish it.
Press Ctrl+O to open an editor to edit line input.
Press Ctrl+C to cancel reply, Ctrl+D to exit REPL"###,
Press Ctrl+C to cancel the response, Ctrl+D to exit the REPL"###,
);
}
