diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b83ed69..e69228c 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,41 +1,45 @@
 # Welcome to DocsGPT Contributing Guidelines
 
-Thank you for choosing this project to contribute to. We are all very grateful!
+Thank you for choosing to contribute to DocsGPT! We are all very grateful!
 
 ### [πŸŽ‰ Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! πŸŽ‰](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
 
 # We accept different types of contributions
 
-πŸ“£ **Discussions** - where you can start a new topic or answer some questions
+πŸ“£ **Discussions** - Engage in conversations, start new topics, or help answer questions.
 
-🐞 **Issues** - This is how we track tasks, sometimes it is bugs that need fixing, and sometimes it is new features
+🐞 **Issues** - This is where we keep track of tasks. It could be bugs that need fixing or suggestions for new features.
 
-πŸ› οΈ **Pull requests** - This is how you can suggest changes to our repository, to work on existing issues or add new features
+πŸ› οΈ **Pull requests** - Suggest changes to our repository, either by working on existing issues or adding new features.
 
-πŸ“š **Wiki** - where we have our documentation
+πŸ“š **Wiki** - This is where our documentation resides.
 
 ## 🐞 Issues and Pull requests
 
-We value contributions to our issues in the form of discussion or suggestions. We recommend that you check out existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
+We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
 
-If you want to contribute by writing code, there are a few things that you should know before doing it:
+Before creating issues, please check out how the latest version of our app looks and works by launching it via [Quickstart](https://github.com/arc53/DocsGPT#quickstart); the version on our live demo is slightly modified, with a login added. Your issues should relate to the version that you can launch via [Quickstart](https://github.com/arc53/DocsGPT#quickstart).
+
+If you're interested in contributing code, here are some important things to know:
+
+We have a frontend built with React (Vite) and a backend in Python.
 
-We have a frontend in React (Vite) and backend in Python.
 
 ### If you are looking to contribute to frontend (βš›οΈReact, Vite):
 
-- The current frontend is being migrated from `/application` to `/frontend` with a new design, so please contribute to the new one.
+- The current frontend is being migrated from [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) to [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) with a new design, so please contribute to the new one.
 - Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
 - The Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1). Please try to follow the guidelines.
 
 ### If you are looking to contribute to Backend (🐍 Python):
 
-- Check out our issues and contribute to `/application` or `/scripts` (ignore old `ingest_rst.py` `ingest_rst_sphinx.py` files; they will be deprecated soon).
-- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
-- Before submitting your PR, ensure it is queryable after ingesting some test data.
+- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts) (please disregard old [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py) files; they will be deprecated soon).
+- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under the [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
+- Before submitting your Pull Request, ensure it can be queried after ingesting some test data.
+
 ### Testing
 To run unit tests from the root of the repository, execute:
@@ -43,10 +47,11 @@ To run unit tests from the root of the repository, execute:
 ```
 python -m pytest
 ```
 
-### Workflow:
-Create a fork, make changes on your forked repository, and submit changes as a pull request.
+### Workflow πŸ“ˆ:
+- Fork the repository
+- Make the required changes on your forked version
+- Commit those changes and submit them as a pull request so that they are reflected in the main repository.
 
 ## Questions/collaboration
-Please join our [Discord](https://discord.gg/n5BX8dh8rU). Don't hesitate; we are very friendly and welcoming to new contributors.
-
+Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU). We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
 # Thank you so much for considering contributing to DocsGPT!πŸ™

diff --git a/HACKTOBERFEST.md b/HACKTOBERFEST.md
index 1a39e56..b164661 100644
--- a/HACKTOBERFEST.md
+++ b/HACKTOBERFEST.md
@@ -17,14 +17,14 @@ Familiarize yourself with the current contributions and our [Roadmap](https://gi
 
 Deciding to contribute with code? Here are some insights based on the area of your interest:
 
 - Frontend (βš›οΈReact, Vite):
-  - Most of the code is located in `/frontend` folder. You can also check out our React extension in /extensions/react-widget.
+  - Most of the code is located in the [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) folder. You can also check out our React extension in [`/extensions/react-widget`](https://github.com/arc53/DocsGPT/tree/main/extensions/react-widget).
   - For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
   - Ensure you adhere to the established guidelines.
 
 - Backend (🐍Python):
-  - Focus on `/application` or `/scripts`. However, avoid the files ingest_rst.py and ingest_rst_sphinx.py, as they will soon be deprecated.
+  - Focus on [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts). However, avoid the files [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) and [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py), as they will soon be deprecated.
   - Newly added code should come with relevant unit tests (pytest).
-  - Refer to the `/tests` folder for test suites.
+  - Refer to the [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder for test suites.
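For contributors who haven't written tests here before, a minimal sketch of such a pytest unit test is shown below. It exercises the `metadata_from_filename` helper from `application/worker.py` (refactored further down in this diff); the test file name is hypothetical, and it assumes the `application` package and its dependencies are importable from the repository root:

```python
# tests/test_worker.py -- illustrative sketch only; the file name is hypothetical.
from application.worker import metadata_from_filename


def test_metadata_from_filename_builds_store_from_path():
    # The refactored helper keeps the full title and joins the second and
    # third path segments into `store`.
    meta = metadata_from_filename("user/repo/docs/index.rst")
    assert meta["title"] == "user/repo/docs/index.rst"
    assert meta["store"] == "repo/docs"
```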
Check out our [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)

diff --git a/README.md b/README.md
index 0fa6a7a..db56186 100644
--- a/README.md
+++ b/README.md
@@ -7,9 +7,9 @@

- DocsGPT is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful GPT models, developers can easily ask questions about a project and receive accurate answers.
+ DocsGPT is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful GPT models, developers can easily ask questions about a project and receive accurate answers.
 
-Say goodbye to time-consuming manual searches, and let DocsGPT help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
+Say goodbye to time-consuming manual searches, and let DocsGPT help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.

@@ -24,7 +24,7 @@ Say goodbye to time-consuming manual searches, and let DocsGPT ### Production Support / Help for companies: We're eager to provide personalized assistance when deploying your DocsGPT to a live environment. -- [Get Support πŸ‘‹](https://airtable.com/appdeaL0F1qV8Bl2C/shrrJF1Ll7btCJRbP) +- [Book Demo πŸ‘‹](https://cal.com/arc53/docsgpt-demo-b2b) - [Send Email βœ‰οΈ](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions) ### [πŸŽ‰ Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! πŸŽ‰](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md) @@ -54,17 +54,20 @@ If you don't have enough resources to run it, you can use bitsnbytes to quantize ## Useful links - [Live preview](https://docsgpt.arc53.com/) + + - πŸ”πŸ”₯ [Live preview](https://docsgpt.arc53.com/) - [Join our Discord](https://discord.gg/n5BX8dh8rU) + - πŸ’¬πŸŽ‰[Join our Discord](https://discord.gg/n5BX8dh8rU) - [Guides](https://docs.docsgpt.co.uk/) + - πŸ“šπŸ˜Ž [Guides](https://docs.docsgpt.co.uk/) + + - πŸ‘©β€πŸ’»πŸ‘¨β€πŸ’» [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md) + + - πŸ—‚οΈπŸš€ [How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation) - [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md) + - πŸ πŸ” [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM) - [How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation) - [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM) ## Project structure @@ -89,15 +92,15 @@ It will install all the dependencies and allow you to download the local model o Otherwise, refer to this Guide: 1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git` -2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your OpenAI API key and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not. +2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys) and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not. It should look like this inside: ``` API_KEY=Yourkey VITE_API_STREAMING=true ``` - See optional environment variables in the `/.env-template` and `/application/.env_sample` files. -3. Run `./run-with-docker-compose.sh`. + See optional environment variables in the [/.env-template](https://github.com/arc53/DocsGPT/blob/main/.env-template) and [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) files. +3. Run [./run-with-docker-compose.sh](https://github.com/arc53/DocsGPT/blob/main/run-with-docker-compose.sh). 4. Navigate to http://localhost:5173/. To stop, just run `Ctrl + C`. @@ -105,7 +108,7 @@ To stop, just run `Ctrl + C`. ## Development environments ### Spin up mongo and redis -For development, only two containers are used from `docker-compose.yaml` (by deleting all services except for Redis and Mongo). +For development, only two containers are used from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) (by deleting all services except for Redis and Mongo). See file [docker-compose-dev.yaml](./docker-compose-dev.yaml). 
Run @@ -119,7 +122,7 @@ docker compose -f docker-compose-dev.yaml up -d Make sure you have Python 3.10 or 3.11 installed. 1. Export required environment variables or prepare a `.env` file in the `/application` folder: - - Copy `.env_sample` and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields. + - Copy [.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields. (check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.) @@ -137,9 +140,9 @@ python -m venv venv venv/Scripts/activate ``` -3. Change to the `application/` subdir and install dependencies for the backend: +3. Change to the `application/` subdir by the command `cd application/` and install dependencies for the backend: ```commandline -pip install -r application/requirements.txt +pip install -r requirements.txt ``` 4. Run the app using `flask run --host=0.0.0.0 --port=7091`. 5. Start worker with `celery -A application.app.celery worker -l INFO`. @@ -148,9 +151,14 @@ pip install -r application/requirements.txt Make sure you have Node version 16 or higher. -1. Navigate to the `/frontend` folder. -2. Install dependencies by running `npm install`. -3. Run the app using `npm run dev`. +1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder. +2. Install required packages `husky` and `vite` (ignore if installed). +```commandline +npm install husky -g +npm install vite -g +``` +3. Install dependencies by running `npm install --include=dev`. +4. Run the app using `npm run dev`. ## Contributing @@ -166,6 +174,6 @@ We as members, contributors, and leaders, pledge to make participation in our co ## License -The source code license is MIT, as described in the LICENSE file. +The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file. Built with [πŸ¦œοΈπŸ”— LangChain](https://github.com/hwchase17/langchain) diff --git a/application/api/user/routes.py b/application/api/user/routes.py index 02a0876..fdff2e9 100644 --- a/application/api/user/routes.py +++ b/application/api/user/routes.py @@ -53,6 +53,15 @@ def get_single_conversation(): conversation = conversations_collection.find_one({"_id": ObjectId(conversation_id)}) return jsonify(conversation['queries']) +@user.route("/api/update_conversation_name", methods=["POST"]) +def update_conversation_name(): + # update data for a conversation + data = request.get_json() + id = data["id"] + name = data["name"] + conversations_collection.update_one({"_id": ObjectId(id)},{"$set":{"name":name}}) + return {"status": "ok"} + @user.route("/api/feedback", methods=["POST"]) def api_feedback(): diff --git a/application/worker.py b/application/worker.py index 5c87c70..71fcd61 100644 --- a/application/worker.py +++ b/application/worker.py @@ -21,8 +21,7 @@ except FileExistsError: def metadata_from_filename(title): - store = title.split('/') - store = store[1] + '/' + store[2] + store = '/'.join(title.split('/')[1:3]) return {'title': title, 'store': store} diff --git a/docs/pages/Deploying/Hosting-the-app.md b/docs/pages/Deploying/Hosting-the-app.md index 7505f60..13296b4 100644 --- a/docs/pages/Deploying/Hosting-the-app.md +++ b/docs/pages/Deploying/Hosting-the-app.md @@ -18,7 +18,7 @@ After that, it is time to pick your Instance Image. 
We recommend using "Linux/Un As for instance plan, it'll vary depending on your unique demands, but a "1 GB, 1vCPU, 40GB SSD and 2TB transfer" setup should cover most scenarios. -Lastly, Identify your instance by giving it a unique name and then hit "Create instance". +Lastly, identify your instance by giving it a unique name and then hit "Create instance". PS: Once you create your instance, it'll likely take a few minutes for the setup to be completed. @@ -42,7 +42,7 @@ A terminal window will pop up, and the first step will be to clone the DocsGPT g #### Download the package information -Once it has finished cloning the repository, it is time to download the package information from all sources. To do so simply enter the following command: +Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command: `sudo apt update` @@ -64,7 +64,7 @@ Enter the following command to access the folder in which DocsGPT docker-compose #### Prepare the environment -Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it. +Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it. `nano .env` @@ -95,7 +95,7 @@ You're almost there! Now that all the necessary bits and pieces have been instal Launching it for the first time will take a few minutes to download all the necessary dependencies and build. -Once this is done you can go ahead and close the terminal window. +Once this is done, you can go ahead and close the terminal window. #### Enabling ports diff --git a/docs/pages/Deploying/Quickstart.md b/docs/pages/Deploying/Quickstart.md index 2cc03c5..5ed37a5 100644 --- a/docs/pages/Deploying/Quickstart.md +++ b/docs/pages/Deploying/Quickstart.md @@ -1,7 +1,7 @@ ## Launching Web App Note: Make sure you have Docker installed -On Mac OS or Linux just write: +On macOS or Linux, just write: `./setup.sh` @@ -10,11 +10,11 @@ It will install all the dependencies and give you an option to download the loca Otherwise, refer to this Guide: 1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`. -2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI api key](https://platform.openai.com/account/api-keys). +2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys). 3. Run `docker-compose build && docker-compose up`. 4. Navigate to `http://localhost:5173/`. -To stop just run `Ctrl + C`. +To stop, just run `Ctrl + C`. ### Chrome Extension diff --git a/docs/pages/Developing/API-docs.md b/docs/pages/Developing/API-docs.md index f1413a3..2d83284 100644 --- a/docs/pages/Developing/API-docs.md +++ b/docs/pages/Developing/API-docs.md @@ -18,7 +18,7 @@ fetch("http://127.0.0.1:5000/api/answer", { .then(console.log.bind(console)) ``` -In response you will get a json document like this one: +In response, you will get a JSON document like this one: ```json { @@ -30,7 +30,7 @@ In response you will get a json document like this one: ### /api/docs_check It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)). -It's a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example: +It's a POST request that sends a JSON in a body with 1 value. 
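If you prefer to try this endpoint from Python, an equivalent request could look like the sketch below (it assumes the single value is a `docs` field naming the documentation set, mirroring the fetch example that follows, and that the API is running locally on port 5000):

```python
# Illustrative sketch -- field name and expected response are inferred from the
# surrounding examples in this page, not from a separate client library.
import requests

response = requests.post(
    "http://127.0.0.1:5000/api/docs_check",
    json={"docs": "default"},  # name of the documentation set to check
    timeout=30,
)
print(response.json())  # e.g. {"status": "exists"}
```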
Here is a JavaScript fetch example: ```js // answer (POST http://127.0.0.1:5000/api/docs_check) @@ -45,7 +45,7 @@ fetch("http://127.0.0.1:5000/api/docs_check", { .then(console.log.bind(console)) ``` -In response you will get a json document like this one: +In response, you will get a JSON document like this one: ```json { "status": "exists" @@ -54,17 +54,17 @@ In response you will get a json document like this one: ### /api/combine -Provides json that tells UI which vectors are available and where they are located with a simple get request. +Provides JSON that tells UI which vectors are available and where they are located with a simple get request. Response will include: `date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`. -Example of json in Docshub and local: +Example of JSON in Docshub and local: image ### /api/upload -Uploads file that needs to be trained, response is json with task id, which can be used to check on tasks progress +Uploads file that needs to be trained, response is JSON with task ID, which can be used to check on task's progress HTML example: ```html @@ -104,7 +104,9 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f Responses: There are two types of responses: + 1. While the task is still running, the 'current' value will show progress from 0 to 100. + ```json { "result": { diff --git a/docs/pages/Extensions/Chatwoot-extension.md b/docs/pages/Extensions/Chatwoot-extension.md index 4dd5782..e95891a 100644 --- a/docs/pages/Extensions/Chatwoot-extension.md +++ b/docs/pages/Extensions/Chatwoot-extension.md @@ -13,7 +13,7 @@ chatwoot_token= 5. Start with `flask run` command. -If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation. +If you want for bot to stop responding to questions for a specific user or session, just add a label `human-requested` in your conversation. ### Optional (extra validation) @@ -26,4 +26,4 @@ account_id=(optional) 1 assignee_id=(optional) 1 ``` -Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user. \ No newline at end of file +Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user. diff --git a/docs/pages/Extensions/react-widget.md b/docs/pages/Extensions/react-widget.md index be4d6bd..1cc1132 100644 --- a/docs/pages/Extensions/react-widget.md +++ b/docs/pages/Extensions/react-widget.md @@ -4,7 +4,7 @@ Got to your project and install a new dependency: `npm install docsgpt`. ### Usage -Go to your project and in the file where you want to use the widget import it: +Go to your project and in the file where you want to use the widget, import it: ```js import { DocsGPTWidget } from "docsgpt"; import "docsgpt/dist/style.css"; @@ -14,12 +14,12 @@ import "docsgpt/dist/style.css"; Then you can use it like this: `` DocsGPTWidget takes 3 props: -- `apiHost` β€” url of your DocsGPT API. -- `selectDocs` β€” documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`). -- `apiKey` β€” usually its empty. +- `apiHost` β€” URL of your DocsGPT API. +- `selectDocs` β€” documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`). +- `apiKey` β€” usually it's empty. 
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX) -Install you widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content: +Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content: ```js import { DocsGPTWidget } from "docsgpt"; import "docsgpt/dist/style.css"; diff --git a/docs/pages/Guides/Customising-prompts.md b/docs/pages/Guides/Customising-prompts.md index 1d3a7d4..19dcdef 100644 --- a/docs/pages/Guides/Customising-prompts.md +++ b/docs/pages/Guides/Customising-prompts.md @@ -1,4 +1,4 @@ -## To customize a main prompt navigate to `/application/prompt/combine_prompt.txt` +## To customize a main prompt, navigate to `/application/prompt/combine_prompt.txt` You can try editing it to see how the model responses. diff --git a/docs/pages/Guides/How-to-train-on-other-documentation.md b/docs/pages/Guides/How-to-train-on-other-documentation.md index 9f4e503..2e8e4af 100644 --- a/docs/pages/Guides/How-to-train-on-other-documentation.md +++ b/docs/pages/Guides/How-to-train-on-other-documentation.md @@ -5,18 +5,18 @@ This AI can use any documentation, but first it needs to be prepared for similar Start by going to `/scripts/` folder. -If you open this file you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`. +If you open this file, you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`. -It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$. +It currently uses OPEN_AI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-$4. -You can usually find documentation on github in `docs/` folder for most open-source projects. +You can usually find documentation on Github in `docs/` folder for most open-source projects. ### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory -Name it `inputs/` -Put all your .rst/.md files in there -The search is recursive, so you don't need to flatten them +- Name it `inputs/` +- Put all your .rst/.md files in there +- The search is recursive, so you don't need to flatten them -If there are no .rst/.md files just convert whatever you find to txt and feed it. (don't forget to change the extension in script) +If there are no .rst/.md files just convert whatever you find to .txt and feed it. (don't forget to change the extension in script) ### 2. Create .env file in `scripts/` folder And write your OpenAI API key inside @@ -32,7 +32,7 @@ It will tell you how much it will cost ### 5. Run web app -Once you run it will use new context that is relevant to your documentation +Once you run it will use new context that is relevant to your documentation Make sure you select default in the dropdown in the UI ## Customization @@ -41,7 +41,7 @@ You can learn more about options while running ingest.py by running: `python ingest.py --help` | Options | | |:--------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:| -| **ingest** | Runs 'ingest' function converting documentation to Faiss plus Index format | +| **ingest** | Runs 'ingest' function, converting documentation to Faiss plus Index format | | --dir TEXT | List of paths to directory for index creation. E.g. 
--dir inputs --dir inputs2 [default: inputs] | | --file TEXT | File paths to use (Optional; overrides directory) E.g. --files inputs/1.md --files inputs/2.md | | --recursive / --no-recursive | Whether to recursively search in subdirectories [default: recursive] | @@ -56,4 +56,4 @@ You can learn more about options while running ingest.py by running: | | | | **convert** | Creates documentation in .md format from source code | | --dir TEXT | Path to a directory with source code. E.g. --dir inputs [default: inputs] | -| --formats TEXT | Source code language from which to create documentation. Supports py, js and java. E.g. --formats py [default: py] | \ No newline at end of file +| --formats TEXT | Source code language from which to create documentation. Supports py, js and java. E.g. --formats py [default: py] | diff --git a/docs/pages/Guides/How-to-use-different-LLM.md b/docs/pages/Guides/How-to-use-different-LLM.md index aa5815f..0eaf483 100644 --- a/docs/pages/Guides/How-to-use-different-LLM.md +++ b/docs/pages/Guides/How-to-use-different-LLM.md @@ -1,4 +1,4 @@ -Fortunately there are many providers for LLM's and some of them can even be ran locally +Fortunately, there are many providers for LLM's and some of them can even be run locally There are two models used in the app: 1. Embeddings. @@ -21,12 +21,16 @@ By default, we use OpenAI's models but if you want to change it or even run it l You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`. Options: -LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon) +LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp) EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium) +If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder: `https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf`. + +Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you. + That's it! ### Hosting everything locally and privately (for using our optimised open-source models) If you are working with important data and don't want anything to leave your premises. -Make sure you set `SELF_HOSTED_MODEL` as true in you `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face. +Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face. diff --git a/docs/pages/Guides/My-AI-answers-questions-using-external-knowledge.md b/docs/pages/Guides/My-AI-answers-questions-using-external-knowledge.md index be1bffa..a546116 100644 --- a/docs/pages/Guides/My-AI-answers-questions-using-external-knowledge.md +++ b/docs/pages/Guides/My-AI-answers-questions-using-external-knowledge.md @@ -1,10 +1,10 @@ -If your AI uses external knowledge and is not explicit enough it is ok, because we try to make docsgpt friendly. +If your AI uses external knowledge and is not explicit enough, it is ok, because we try to make DocsGPT friendly. -But if you want to adjust it, here is a simple way. 
+But if you want to adjust it, here is a simple way:- -Got to `application/prompts/chat_combine_prompt.txt` +- Got to `application/prompts/chat_combine_prompt.txt` -And change it to +- And change it to ``` diff --git a/frontend/.env.development b/frontend/.env.development index 2bb6711..7a87f76 100644 --- a/frontend/.env.development +++ b/frontend/.env.development @@ -1,3 +1,3 @@ # Please put appropriate value -VITE_API_HOST=http://localhost:7091 +VITE_API_HOST=http://0.0.0.0:7091 VITE_API_STREAMING=true \ No newline at end of file diff --git a/frontend/package-lock.json b/frontend/package-lock.json index 8cf969d..7c08dfb 100644 --- a/frontend/package-lock.json +++ b/frontend/package-lock.json @@ -11,6 +11,7 @@ "@reduxjs/toolkit": "^1.9.2", "@vercel/analytics": "^0.1.10", "react": "^18.2.0", + "react-copy-to-clipboard": "^5.1.0", "react-dom": "^18.2.0", "react-dropzone": "^14.2.3", "react-markdown": "^8.0.7", @@ -2248,6 +2249,14 @@ "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==", "dev": true }, + "node_modules/copy-to-clipboard": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/copy-to-clipboard/-/copy-to-clipboard-3.3.3.tgz", + "integrity": "sha512-2KV8NhB5JqC3ky0r9PMCAZKbUHSwtEo4CwCs0KXgruG43gX5PMqDEBbVU4OUzw2MuAWUfsuFmWvEKG5QRfSnJA==", + "dependencies": { + "toggle-selection": "^1.0.6" + } + }, "node_modules/cosmiconfig": { "version": "7.1.0", "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-7.1.0.tgz", @@ -6072,6 +6081,18 @@ "node": ">=0.10.0" } }, + "node_modules/react-copy-to-clipboard": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/react-copy-to-clipboard/-/react-copy-to-clipboard-5.1.0.tgz", + "integrity": "sha512-k61RsNgAayIJNoy9yDsYzDe/yAZAzEbEgcz3DZMhF686LEyukcE1hzurxe85JandPUG+yTfGVFzuEw3xt8WP/A==", + "dependencies": { + "copy-to-clipboard": "^3.3.1", + "prop-types": "^15.8.1" + }, + "peerDependencies": { + "react": "^15.3.0 || 16 || 17 || 18" + } + }, "node_modules/react-dom": { "version": "18.2.0", "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-18.2.0.tgz", @@ -6907,6 +6928,11 @@ "node": ">=8.0" } }, + "node_modules/toggle-selection": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/toggle-selection/-/toggle-selection-1.0.6.tgz", + "integrity": "sha512-BiZS+C1OS8g/q2RRbJmy59xpyghNBqrr6k5L/uKBGRsTfxmu3ffiRnd8mlGPUVayg8pvfi5urfnu8TU7DVOkLQ==" + }, "node_modules/trim-lines": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz", diff --git a/frontend/package.json b/frontend/package.json index 66d14dd..9dcbf4a 100644 --- a/frontend/package.json +++ b/frontend/package.json @@ -22,6 +22,7 @@ "@reduxjs/toolkit": "^1.9.2", "@vercel/analytics": "^0.1.10", "react": "^18.2.0", + "react-copy-to-clipboard": "^5.1.0", "react-dom": "^18.2.0", "react-dropzone": "^14.2.3", "react-markdown": "^8.0.7", diff --git a/frontend/src/About.tsx b/frontend/src/About.tsx index 303708c..fe26835 100644 --- a/frontend/src/About.tsx +++ b/frontend/src/About.tsx @@ -4,7 +4,7 @@ export default function About() { return (
-
+

About DocsGPT

πŸ¦–

@@ -51,9 +51,11 @@ export default function About() {

- Currently It uses DocsGPT documentation, so it will respond to - information relevant to DocsGPT . If you want to train it on different - documentation - please follow + Currently It uses{' '} + DocsGPT{' '} + documentation, so it will respond to information relevant to{' '} + DocsGPT . If you + want to train it on different documentation - please follow diff --git a/frontend/src/Hero.tsx b/frontend/src/Hero.tsx index df0ad28..0644da6 100644 --- a/frontend/src/Hero.tsx +++ b/frontend/src/Hero.tsx @@ -5,26 +5,28 @@ export default function Hero({ className = '' }: { className?: string }) {

DocsGPT

πŸ¦–

-

+

Welcome to DocsGPT, your technical documentation assistant!

-

+

Enter a query related to the information in the documentation you - selected to receive and we will provide you with the most relevant - answers. + selected to receive +
and we will provide you with the most relevant answers.

-

+

Start by entering your query in the input field below and we will do the rest!

-
-
-
- lock -

- Chat with Your Data -

-

+

+
+
+ lock +

Chat with Your Data

+

DocsGPT will use your data to answer questions. Whether its documentation, source code, or Microsoft files, DocsGPT allows you to have interactive conversations and find answers based on the @@ -33,13 +35,11 @@ export default function Hero({ className = '' }: { className?: string }) {

-
-
- lock -

- Secure Data Storage -

-

+

+
+ lock +

Secure Data Storage

+

The security of your data is our top priority. DocsGPT ensures the utmost protection for your sensitive information. With secure data storage and privacy measures in place, you can trust that your @@ -47,17 +47,15 @@ export default function Hero({ className = '' }: { className?: string }) {

-
-
+
+
lock -

- Open Source Code -

-

+

Open Source Code

+

DocsGPT is built on open source principles, promoting transparency and collaboration. The source code is freely available, enabling developers to contribute, enhance, and customize the app to meet diff --git a/frontend/src/Navigation.tsx b/frontend/src/Navigation.tsx index e6c5bb6..e236b8a 100644 --- a/frontend/src/Navigation.tsx +++ b/frontend/src/Navigation.tsx @@ -32,6 +32,7 @@ import { useMediaQuery, useOutsideAlerter } from './hooks'; import Upload from './upload/Upload'; import { Doc, getConversations } from './preferences/preferenceApi'; import SelectDocsModal from './preferences/SelectDocsModal'; +import ConversationTile from './conversation/ConversationTile'; interface NavigationProps { navOpen: boolean; @@ -68,27 +69,26 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) { useEffect(() => { if (!conversations) { - getConversations() - .then((fetchedConversations) => { - dispatch(setConversations(fetchedConversations)); - }) - .catch((error) => { - console.error('Failed to fetch conversations: ', error); - }); + fetchConversations(); } }, [conversations, dispatch]); + async function fetchConversations() { + return await getConversations() + .then((fetchedConversations) => { + dispatch(setConversations(fetchedConversations)); + }) + .catch((error) => { + console.error('Failed to fetch conversations: ', error); + }); + } + const handleDeleteConversation = (id: string) => { fetch(`${apiHost}/api/delete_conversation?id=${id}`, { method: 'POST', }) .then(() => { - // remove the image element from the DOM - const imageElement = document.querySelector( - `#img-${id}`, - ) as HTMLElement; - const parentElement = imageElement.parentNode as HTMLElement; - parentElement.parentNode?.removeChild(parentElement); + fetchConversations(); }) .catch((error) => console.error(error)); }; @@ -126,6 +126,29 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) { ); }); }; + + async function updateConversationName(updatedConversation: { + name: string; + id: string; + }) { + await fetch(`${apiHost}/api/update_conversation_name`, { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify(updatedConversation), + }) + .then((response) => response.json()) + .then((data) => { + if (data) { + navigate('/'); + fetchConversations(); + } + }) + .catch((err) => { + console.error(err); + }); + } useOutsideAlerter( navRef, () => { @@ -210,41 +233,17 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {

{conversations - ? conversations.map((conversation) => { - return ( -
{ - handleConversationClick(conversation.id); - }} - className={`my-auto mx-4 mt-4 flex h-12 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${ - conversationId === conversation.id ? 'bg-gray-100' : '' - }`} - > -
- -

- {conversation.name.length > 45 - ? conversation.name.substring(0, 45) + '...' - : conversation.name} -

-
- - {conversationId === conversation.id ? ( - Exit { - event.stopPropagation(); - handleDeleteConversation(conversation.id); - }} - /> - ) : null} -
- ); - }) + ? conversations.map((conversation) => ( + handleConversationClick(id)} + onDeleteConversation={(id) => handleDeleteConversation(id)} + onSave={(conversation) => + updateConversationName(conversation) + } + /> + )) : null}
@@ -252,7 +251,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
setIsDocsListOpen(!isDocsListOpen)} > {selectedDocs && ( diff --git a/frontend/src/assets/checkMark.svg b/frontend/src/assets/checkMark.svg new file mode 100644 index 0000000..9ed02cb --- /dev/null +++ b/frontend/src/assets/checkMark.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/frontend/src/assets/checkmark.svg b/frontend/src/assets/checkmark.svg new file mode 100644 index 0000000..9ed02cb --- /dev/null +++ b/frontend/src/assets/checkmark.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/frontend/src/assets/copy.svg b/frontend/src/assets/copy.svg new file mode 100644 index 0000000..846d285 --- /dev/null +++ b/frontend/src/assets/copy.svg @@ -0,0 +1,3 @@ + + + diff --git a/frontend/src/assets/edit.svg b/frontend/src/assets/edit.svg new file mode 100644 index 0000000..2565377 --- /dev/null +++ b/frontend/src/assets/edit.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/frontend/src/assets/trash.svg b/frontend/src/assets/trash.svg new file mode 100644 index 0000000..d0e4546 --- /dev/null +++ b/frontend/src/assets/trash.svg @@ -0,0 +1,3 @@ + + + \ No newline at end of file diff --git a/frontend/src/conversation/Conversation.tsx b/frontend/src/conversation/Conversation.tsx index ab43576..87a5ebb 100644 --- a/frontend/src/conversation/Conversation.tsx +++ b/frontend/src/conversation/Conversation.tsx @@ -29,6 +29,13 @@ export default function Conversation() { scrollIntoView(); }, [queries.length, queries[queries.length - 1]]); + useEffect(() => { + const element = document.getElementById('inputbox') as HTMLInputElement; + if (element) { + element.focus(); + } + }, []); + useEffect(() => { const observerCallback: IntersectionObserverCallback = (entries) => { entries.forEach((entry) => { @@ -81,7 +88,7 @@ export default function Conversation() { responseView = ( )} -
+
{ if (e.key === 'Enter' && !e.shiftKey) { e.preventDefault(); diff --git a/frontend/src/conversation/ConversationBubble.tsx b/frontend/src/conversation/ConversationBubble.tsx index b74c65f..aba7c63 100644 --- a/frontend/src/conversation/ConversationBubble.tsx +++ b/frontend/src/conversation/ConversationBubble.tsx @@ -4,7 +4,10 @@ import { FEEDBACK, MESSAGE_TYPE } from './conversationModels'; import Alert from './../assets/alert.svg'; import { ReactComponent as Like } from './../assets/like.svg'; import { ReactComponent as Dislike } from './../assets/dislike.svg'; +import { ReactComponent as Copy } from './../assets/copy.svg'; +import { ReactComponent as Checkmark } from './../assets/checkmark.svg'; import ReactMarkdown from 'react-markdown'; +import copy from 'copy-to-clipboard'; import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'; import { vscDarkPlus } from 'react-syntax-highlighter/dist/cjs/styles/prism'; @@ -26,6 +29,17 @@ const ConversationBubble = forwardRef< ) { const [showFeedback, setShowFeedback] = useState(false); const [openSource, setOpenSource] = useState(null); + const [copied, setCopied] = useState(false); + + const handleCopyClick = (text: string) => { + copy(text); + setCopied(true); + // Reset copied to false after a few seconds + setTimeout(() => { + setCopied(false); + }, 2000); + }; + const List = ({ ordered, children, @@ -62,15 +76,15 @@ const ConversationBubble = forwardRef<
{type === 'ERROR' && ( alert )} {DisableSourceFE || type === 'ERROR' ? null : ( - - )} -
- {DisableSourceFE || type === 'ERROR' ? null : ( -
- Sources: -
- )} -
- {DisableSourceFE - ? null - : sources?.map((source, index) => ( + <> + +
+
+ Sources: +
+
+ {sources?.map((source, index) => (
))} -
-
+
+
+ + )} +
+
+ {copied ? ( + + ) : ( + { + handleCopyClick(message); + }} + > + )}
void; + onDeleteConversation: (arg1: string) => void; + onSave: ({ name, id }: ConversationProps) => void; +} + +export default function ConversationTile({ + conversation, + selectConversation, + onDeleteConversation, + onSave, +}: ConversationTileProps) { + const conversationId = useSelector(selectConversationId); + const tileRef = useRef(null); + + const [isEdit, setIsEdit] = useState(false); + const [conversationName, setConversationsName] = useState(''); + useOutsideAlerter( + tileRef, + () => + handleSaveConversation({ + id: conversationId || conversation.id, + name: conversationName, + }), + [conversationName], + ); + + useEffect(() => { + setConversationsName(conversation.name); + }, [conversation.name]); + + function handleEditConversation() { + setIsEdit(true); + } + + function handleSaveConversation(changedConversation: ConversationProps) { + if (changedConversation.name.trim().length) { + onSave(changedConversation); + setIsEdit(false); + } else { + onClear(); + } + } + + function onClear() { + setConversationsName(conversation.name); + setIsEdit(false); + } + return ( +
{ + selectConversation(conversation.id); + }} + className={`my-auto mx-4 mt-4 flex h-12 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${ + conversationId === conversation.id ? 'bg-gray-100' : '' + }`} + > +
+ + {isEdit ? ( + setConversationsName(e.target.value)} + /> + ) : ( +

+ {conversationName} +

+ )} +
+ {conversationId === conversation.id ? ( +
+ Edit { + event.stopPropagation(); + isEdit + ? handleSaveConversation({ + id: conversationId, + name: conversationName, + }) + : handleEditConversation(); + }} + /> + Exit { + event.stopPropagation(); + isEdit ? onClear() : onDeleteConversation(conversation.id); + }} + /> +
+ ) : null} +
+ ); +} diff --git a/frontend/tailwind.config.cjs b/frontend/tailwind.config.cjs index 8e395a0..b76b022 100644 --- a/frontend/tailwind.config.cjs +++ b/frontend/tailwind.config.cjs @@ -1,6 +1,7 @@ /** @type {import('tailwindcss').Config} */ module.exports = { content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'], + darkMode: 'class', theme: { extend: { spacing: { diff --git a/scripts/ingest.py b/scripts/ingest.py index 6ab9cce..8c74fd0 100644 --- a/scripts/ingest.py +++ b/scripts/ingest.py @@ -78,14 +78,12 @@ def ingest(yes: bool = typer.Option(False, "-y", "--yes", prompt=False, # Here we check for command line arguments for bot calls. # If no argument exists or the yes is not True, then the # user permission is requested to call the API. - if len(sys.argv) > 1: - if yes: - call_openai_api(docs, folder_name) - else: - get_user_permission(docs, folder_name) + if len(sys.argv) > 1 and yes: + call_openai_api(docs, folder_name) else: get_user_permission(docs, folder_name) + folder_counts = defaultdict(int) folder_names = [] for dir_path in dir: @@ -110,14 +108,19 @@ def convert(dir: Optional[str] = typer.Option("inputs", Creates documentation linked to original functions from specified location. By default /inputs folder is used, .py is parsed. """ - if formats == 'py': - functions_dict, classes_dict = extract_py(dir) - elif formats == 'js': - functions_dict, classes_dict = extract_js(dir) - elif formats == 'java': - functions_dict, classes_dict = extract_java(dir) + # Using a dictionary to map between the formats and their respective extraction functions + # makes the code more scalable. When adding more formats in the future, + # you only need to update the extraction_functions dictionary. + extraction_functions = { + 'py': extract_py, + 'js': extract_js, + 'java': extract_java + } + + if formats in extraction_functions: + functions_dict, classes_dict = extraction_functions[formats](dir) else: - raise Exception("Sorry, language not supported yet") + raise Exception("Sorry, language not supported yet") transform_to_docs(functions_dict, classes_dict, formats, dir)
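As the comment in the `ingest.py` hunk above points out, the dictionary dispatch keeps the `convert` command easy to extend. Purely as a hypothetical sketch (no `extract_ts` helper exists in the repository), supporting another source language would only require registering one more entry:

```python
# Hypothetical sketch: registering an additional extractor for the convert command.
extraction_functions = {
    'py': extract_py,
    'js': extract_js,
    'java': extract_java,
    'ts': extract_ts,  # would require writing a new extract_ts(dir) helper
}
```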