* feat: support vision
* clippy
* implement vision
* resolve data url to local file
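Resolving a data URL to a local file presumably means decoding the base64 payload and writing it somewhere on disk so the vision client can reference a real path. A minimal sketch of that idea, using only the standard library; all names (`parse_data_url`, `data_url_to_local_file`) are illustrative, not aichat's actual API, and the decoder handles only the standard alphabet:

```rust
use std::fs;
use std::path::PathBuf;

/// Minimal standard-alphabet base64 decoder (no external crates);
/// enough for this sketch, not a general-purpose implementation.
fn b64_decode(s: &str) -> Option<Vec<u8>> {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut buf: u32 = 0;
    let mut bits = 0u32;
    let mut out = Vec::new();
    for c in s.bytes() {
        if c == b'=' {
            break; // padding: remaining bits are discarded
        }
        let v = ALPHABET.iter().position(|&t| t == c)? as u32;
        buf = (buf << 6) | v;
        bits += 6;
        if bits >= 8 {
            bits -= 8;
            out.push((buf >> bits) as u8);
        }
        buf &= 0xFF; // keep only the bits still pending
    }
    Some(out)
}

/// Split a `data:` URL into (mime type, decoded payload).
fn parse_data_url(url: &str) -> Option<(String, Vec<u8>)> {
    let rest = url.strip_prefix("data:")?;
    let (meta, payload) = rest.split_once(',')?;
    if let Some(mime) = meta.strip_suffix(";base64") {
        Some((mime.to_string(), b64_decode(payload)?))
    } else {
        // non-base64 payloads are percent-encoded text; decoding omitted here
        Some((meta.to_string(), payload.as_bytes().to_vec()))
    }
}

/// Write the payload of a data URL to a file under the system temp dir
/// and return its path (illustrative behavior; a real implementation
/// would pick a collision-free name).
fn data_url_to_local_file(url: &str) -> Option<PathBuf> {
    let (mime, bytes) = parse_data_url(url)?;
    let ext = mime.rsplit('/').next().unwrap_or("bin");
    let path = std::env::temp_dir().join(format!("embedded.{ext}"));
    fs::write(&path, bytes).ok()?;
    Some(path)
}

fn main() {
    let path = data_url_to_local_file("data:text/plain;base64,aGVsbG8=").unwrap();
    println!("wrote {}", path.display());
}
```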
* add model openai:gpt-4-vision-preview
* use newline to concatenate embedded text files
* set max_tokens for gpt-4-vision-preview
* refactor: redesign render
- if stdout is not a terminal, just write the reply text to stdout
- rename repl_render_stream to markdown_stream
- deprecate cmd_render_stream
- use raw_stream to print the streaming reply text as-is
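The render redesign above amounts to a TTY check at the output boundary: markdown rendering when attached to a terminal, raw pass-through when piped. A minimal sketch of that dispatch, assuming Rust 1.70+ for `std::io::IsTerminal`; the mode names mirror the bullet list but the function itself is hypothetical, not aichat's actual code:

```rust
use std::io::{stdout, IsTerminal};

/// Pick how to emit a streaming reply: markdown rendering for an
/// interactive terminal, raw text when stdout is piped or redirected.
/// Illustrative helper; the real code paths are more involved.
fn select_output(is_tty: bool) -> &'static str {
    if is_tty {
        "markdown_stream"
    } else {
        "raw_stream"
    }
}

fn main() {
    // `is_terminal()` is false when stdout is a pipe or a file.
    let mode = select_output(stdout().is_terminal());
    println!("rendering via {mode}");
}
```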
* optimize error rendering
* optimize render_stream
- rewrite Repl, remove ReplHandler
- move ReplyStreamHandler to repl/ and rename it to ReplyHandler
- deprecate utils::print_now
- refactor session info
- add register_clients macro to make it easier to add a new client
- drop create_client_config; add a const PROMPTS instead
- move ModelInfo from clients/ to config/
- make a model's max_tokens optional
- improve code quality in config/mod.rs
- add and use the config_get_fn macro
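A `register_clients`-style macro plausibly turns one declarative list of clients into the enum, name table, and lookup code that would otherwise be edited by hand in several places. The sketch below shows the general shape with `macro_rules!`; every name here (`ClientKind`, `all_client_ids`, `client_from_id`) is hypothetical and the real aichat macro certainly generates different items:

```rust
// Hypothetical sketch: one declarative list expands into an enum of
// client kinds plus id-based lookups, so adding a client is one line.
macro_rules! register_clients {
    ($($name:ident => $id:literal),+ $(,)?) => {
        #[derive(Debug, PartialEq)]
        enum ClientKind { $($name),+ }

        /// All registered client ids, in declaration order.
        fn all_client_ids() -> Vec<&'static str> {
            vec![$($id),+]
        }

        /// Resolve a config id string to its client kind.
        fn client_from_id(id: &str) -> Option<ClientKind> {
            match id {
                $($id => Some(ClientKind::$name),)+
                _ => None,
            }
        }
    };
}

register_clients! {
    OpenAI => "openai",
    LocalAI => "localai",
}

fn main() {
    println!("registered clients: {:?}", all_client_ids());
}
```

The payoff of this pattern is that the match arms and the id list can never drift out of sync, since both are generated from the same token list.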