- Refactored the CLI and LLM classes to improve code organization and readability.
- Added a function to create an LLM instance based on the config.
- Moved the function to the and classes.
- Added a function to handle loading an existing vector store.
- Added a function to estimate the cost of creating a vector store for OpenAI models.
- Updated the function to prompt for the model type, and for a model path or API key depending on the selected type.
- Updated the function to use the function and method of the LLM instance.
- Updated the default config to include default values for and .
- Added a constant to store the default config values.
- Added a constant to store the default model path.
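
The factory and cost-estimation steps above could be sketched as follows. All names (`LocalLLM`, `OpenAILLM`, `create_llm`, `estimate_cost`) are hypothetical stand-ins for the elided identifiers, and the pricing figure and chars-per-token heuristic are assumptions, not the project's actual values.

```python
# Hypothetical sketch: a factory that picks an LLM backend from the config,
# plus a rough cost estimate for embedding a codebase with OpenAI models.
PRICE_PER_1K_TOKENS = 0.0001  # assumed embedding price in USD, not official


class LocalLLM:
    def __init__(self, model_path: str):
        self.model_path = model_path


class OpenAILLM:
    def __init__(self, api_key: str, model_name: str):
        self.api_key = api_key
        self.model_name = model_name


def create_llm(config: dict):
    """Create an LLM instance based on the config's model type."""
    if config["model_type"] == "local":
        return LocalLLM(config["model_path"])
    if config["model_type"] == "openai":
        return OpenAILLM(config["api_key"], config["model_name"])
    raise ValueError(f"Unknown model type: {config['model_type']}")


def estimate_cost(texts: list[str]) -> float:
    """Estimate embedding cost with the rough 4-characters-per-token rule."""
    total_tokens = sum(len(t) for t in texts) / 4
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS
```

Keeping the factory in one place means the CLI only ever asks for a config dict and never branches on the model type itself.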
- Refactored the text loader in utils.py to improve readability and maintainability.
- Added a new function in utils.py to check whether the current directory is a git repository.
- Modified the function in utils.py to skip files and directories that git ignores.
- Added error handling to the function in utils.py to return if the current directory is not a git repository.
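
A minimal sketch of the two git checks described above, shelling out to the git CLI; the function names are illustrative and may not match the real utils.py.

```python
# Illustrative sketch: detect a git working tree and filter out paths that
# git would ignore, using the `git rev-parse` and `git check-ignore` commands.
import subprocess


def is_git_repository(path: str = ".") -> bool:
    """Return True if `path` is inside a git working tree."""
    try:
        result = subprocess.run(
            ["git", "rev-parse", "--is-inside-work-tree"],
            cwd=path, capture_output=True, text=True,
        )
    except FileNotFoundError:  # git itself is not installed
        return False
    return result.returncode == 0 and result.stdout.strip() == "true"


def filter_ignored(paths: list[str], repo: str = ".") -> list[str]:
    """Drop paths that git would ignore, via `git check-ignore --stdin`."""
    if not paths:
        return []
    result = subprocess.run(
        ["git", "check-ignore", "--stdin"],
        cwd=repo, input="\n".join(paths), capture_output=True, text=True,
    )
    ignored = set(result.stdout.splitlines())
    return [p for p in paths if p not in ignored]
```

Delegating to `git check-ignore` keeps the tool's behavior identical to git's own ignore rules (nested .gitignore files, global excludes) without reimplementing the matching logic.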
- Added a spinner using the Halo package to indicate when files are being loaded.
- The spinner appears while the load_files function is running and disappears once the function has completed.
- This should improve the user experience by providing feedback that the program is actively doing something.
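
The project uses the Halo package for this; the pattern itself (animate a spinner on a background thread while the blocking work runs) can be sketched with the standard library alone:

```python
# Stdlib sketch of the spinner pattern: run a slow function while a
# background thread animates a spinner on stderr, then clear the line.
import itertools
import sys
import threading
import time


def with_spinner(func, *args, text="Loading files", **kwargs):
    """Run func(*args, **kwargs) while showing a spinner, return its result."""
    done = threading.Event()

    def spin():
        for frame in itertools.cycle("|/-\\"):
            if done.is_set():
                break
            sys.stderr.write(f"\r{text} {frame}")
            sys.stderr.flush()
            time.sleep(0.1)
        sys.stderr.write("\r" + " " * (len(text) + 2) + "\r")  # clear the line

    thread = threading.Thread(target=spin)
    thread.start()
    try:
        return func(*args, **kwargs)
    finally:
        done.set()
        thread.join()
```

With Halo itself the wrapper collapses to a context manager, e.g. `with Halo(text="Loading files", spinner="dots"): files = load_files()`.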
- Add CLI functionality for chatting with an OpenAI model
- Implement a function that lets users enter an OpenAI API key and model name
- Implement a function that lets users chat with the OpenAI model using retrieved documents
- Add a module to handle sending questions to the OpenAI model
- Add a module to load and split text documents, create the retriever, and define the StreamStdOut callback class
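
The two pieces in the last entry could look like the sketch below: a naive overlapping text splitter for the retriever, and a callback that prints each token as the model streams it. The class name mirrors the changelog's StreamStdOut, but the chunk sizes, method name, and splitting logic are assumptions.

```python
# Illustrative sketch: split documents into overlapping chunks for retrieval,
# and stream model tokens straight to stdout for a live answer.
import sys


def split_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a document into fixed-size chunks that overlap slightly,
    so sentences cut at a boundary still appear whole in one chunk."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


class StreamStdOut:
    """Callback that writes each new token immediately instead of waiting
    for the full completion."""

    def on_llm_new_token(self, token: str) -> None:
        sys.stdout.write(token)
        sys.stdout.flush()
```

Streaming tokens as they arrive makes the chat feel responsive even when the full answer takes several seconds to generate.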