
talk-codebase


  • Talk-codebase is a tool that lets you converse with your codebase, using LLMs (Large Language Models) to answer your questions.
  • Simple configuration in just a couple of clicks.
  • Supports offline code processing with LlamaCpp and GPT4All, so your code is never shared with third parties; alternatively, you can use OpenAI if privacy is not a concern.
  • Talk-codebase is still under development. It is recommended for educational purposes only, not for production use.


Installation

Install talk-codebase with pip:

# Install talk-codebase
pip install talk-codebase

# If you want some files to be ignored, add them to your .gitignore.
# Once installed, chat with the codebase in the current directory:
talk-codebase chat .

Reset configuration

# To reset the configuration, run:
talk-codebase configure

Advanced configuration

You can also edit the configuration manually in the ~/.config.yaml file. If you cannot find the configuration file, just run the tool: it prints the path to the configuration file at startup.

# The OpenAI API key. You can get it from https://beta.openai.com/account/api-keys
api_key: sk-xxx

# Configuration for chunking
chunk_overlap: 50
chunk_size: 500

# Configuration for sampling
k: 4
max_tokens: 1048

# Configuration for the LLM model
openai_model_name: gpt-3.5-turbo
# Type of model to use. You can choose between `openai` and `local`.
model_type: openai
local_model_name: orca-mini-7b.ggmlv3.q4_0.bin
# Path to local model. If you want to use a local model, you need to specify the path to it.
model_path: 'absolute path to local model'
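
The chunk_size and chunk_overlap settings control how source files are split before being embedded and retrieved. A minimal sketch of overlapping chunking with these defaults (illustrative only, not talk-codebase's actual implementation, which delegates splitting to its LLM tooling):

```python
def split_into_chunks(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """Split text into chunks of chunk_size characters, where each chunk
    repeats the last chunk_overlap characters of the previous one so that
    context is not lost at chunk boundaries. Illustrative sketch only."""
    step = chunk_size - chunk_overlap
    return [
        text[i:i + chunk_size]
        for i in range(0, max(len(text) - chunk_overlap, 1), step)
    ]
```

A larger chunk_overlap preserves more context across boundaries at the cost of more chunks (and therefore more tokens) per file.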

Supports the following extensions:

  • .csv
  • .doc
  • .docx
  • .epub
  • .md
  • .pdf
  • .txt
  • popular programming languages
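
When indexing a directory, files outside this allow-list are skipped. A simple sketch of that kind of extension filter (the set below is hypothetical; talk-codebase's actual list lives in its source):

```python
from pathlib import Path

# Hypothetical allow-list for illustration; not the tool's real list.
SUPPORTED_EXTENSIONS = {".csv", ".doc", ".docx", ".epub", ".md", ".pdf", ".txt", ".py"}

def is_supported(path: str) -> bool:
    """Return True if the file's extension is in the allow-list."""
    return Path(path).suffix.lower() in SUPPORTED_EXTENSIONS
```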

Contributing

  • If you find a bug in talk-codebase, please report it on the project's issue tracker. Include as much information as possible: the steps to reproduce the bug, the expected behavior, and the actual behavior.
  • If you have an idea for a new feature, please open an issue on the project's issue tracker with a brief description of the feature and the rationale for why it would be useful.
  • You can also contribute code. The project always welcomes help with improving the codebase, adding new features, and fixing bugs.