
DocsGPT 🦖

Open-Source Documentation Assistant

DocsGPT is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. By integrating powerful GPT models, it lets developers easily ask questions about a project and receive accurate answers.

Say goodbye to time-consuming manual searches, and let DocsGPT help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.


Production Support / Help for Companies:

We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.

(Video demo of DocsGPT)

Roadmap

You can find our roadmap here. Please don't hesitate to contribute or create issues; it helps us improve DocsGPT!

Our Open-Source Models Optimized for DocsGPT:

| Name               | Base Model  | Requirements (or similar) |
|--------------------|-------------|---------------------------|
| Docsgpt-7b-falcon  | Falcon-7b   | 1x A10G GPU               |
| Docsgpt-14b        | llama-2-14b | 2x A10 GPUs               |
| Docsgpt-40b-falcon | falcon-40b  | 8x A10G GPUs              |

If you don't have enough resources to run them, you can use bitsandbytes to quantize them; a sketch of this is shown below.
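As an illustration only (the Hugging Face repo id and settings below are assumptions, not taken from this README), 4-bit quantized loading with bitsandbytes through the transformers API might look like this; it requires a CUDA-capable GPU:

```python
# Minimal sketch: load a DocsGPT model with 4-bit quantization via bitsandbytes.
# The repo id below is a placeholder; substitute the model you actually want to run.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Arc53/docsgpt-7b-falcon"  # assumption: Hugging Face repo id for the 7b model
quant_config = BitsAndBytesConfig(load_in_4bit=True)  # quantize weights to 4-bit at load time

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across the available GPU(s)
)
```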

Features

(Diagram showcasing the six main features of DocsGPT)

Project Structure

  • Application - Flask app (main application).

  • Extensions - Chrome extension.

  • Scripts - Script that creates a similarity search index for other libraries.

  • Frontend - Frontend uses Vite and React.

QuickStart

Note

Make sure you have Docker installed

On macOS or Linux, run:

./setup.sh

It will install all the dependencies and allow you to download the local model, use OpenAI, or use our LLM API.

Otherwise, refer to this Guide:

  1. Download and open this repository with git clone https://github.com/arc53/DocsGPT.git

  2. Create a .env file in your root directory and set the environment variables. Set VITE_API_STREAMING to true or false, depending on whether you want streaming answers. The file should look like this inside:

    LLM_NAME=[docsgpt or openai or others] 
    VITE_API_STREAMING=true
    API_KEY=[if LLM_NAME is openai]
    

    See optional environment variables in the /.env-template and /application/.env_sample files.

  3. Run ./run-with-docker-compose.sh.

  4. Navigate to http://localhost:5173/.

To stop, just press Ctrl + C.

Development Environments

Spin up Mongo and Redis

For development, only two containers from docker-compose.yaml are used (all services are removed except Redis and Mongo). See the file docker-compose-dev.yaml.

Run

docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d

Run the Backend

Note

Make sure you have Python 3.10 or 3.11 installed.

  1. Export the required environment variables or prepare a .env file in the /application folder; a minimal example is shown below.

(check out application/core/settings.py if you want to see more config options.)
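For example, a minimal .env for the OpenAI setup might look like this (the key value is a placeholder; application/core/settings.py lists the remaining options):

    LLM_NAME=openai
    API_KEY=your_openai_api_key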

  2. (optional) Create a Python virtual environment: You can follow the official Python documentation for virtual environments.

a) On Mac OS and Linux

python -m venv venv
. venv/bin/activate

b) On Windows

python -m venv venv
venv\Scripts\activate

  3. Download the embedding model and save it in the model/ folder: You can use the script below, or download it manually from here, unzip it, and save it in the model/ folder.
wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip
unzip mpnet-base-v2.zip -d model
rm mpnet-base-v2.zip

  4. Change to the `application/` subdirectory with `cd application/` and install the backend dependencies:

```commandline
pip install -r requirements.txt
```

  5. Run the app using `flask --app application/app.py run --host=0.0.0.0 --port=7091`.
  6. Start the worker with `celery -A application.app.celery worker -l INFO`.

Start Frontend

Note

Make sure you have Node version 16 or higher.

  1. Navigate to the /frontend folder.
  2. Install the required packages husky and vite (ignore if already installed).
npm install husky -g
npm install vite -g
  3. Install dependencies by running npm install --include=dev.
  4. Run the app using npm run dev.

Contributing

Please refer to the CONTRIBUTING.md file for information about how to get involved. We welcome issues, questions, and pull requests.

Code Of Conduct

We as members, contributors, and leaders, pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the CODE_OF_CONDUCT.md file for more information about contributing.

Many Thanks To Our Contributors

Contributors

License

The source code license is MIT, as described in the LICENSE file.

Built with 🐦 🔗 LangChain