Merge pull request #849 from arc53/main

Sync
Alex committed 2e14dec12d (via GitHub)

@ -1,4 +1,5 @@
API_KEY=<LLM api key (for example, open ai key)>
LLM_NAME=docsgpt
VITE_API_STREAMING=true
#For Azure (you can delete it if you don't use Azure)

@ -1,5 +1,5 @@
organization: arc53
defaultSticker: cln9dm7qz164460gk5ksrgr034
defaultSticker: clqmdf0ed34290glbvqh0kzxd
stickers:
- id: cln9dm7qz164460gk5ksrgr034
alias: hacktober
- id: clqmdf0ed34290glbvqh0kzxd
alias: festive

Binary file not shown (image added, 88 KiB).

Binary file not shown (image added, 21 KiB).

@ -2,58 +2,58 @@
## Our Pledge
We as members, contributors, and leaders, pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
We as members, contributors and leaders pledge to make participation in our
community, a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
nationality, personal appearance, race, religion or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and a healthy community.
diverse, inclusive and a healthy community.
## Our Standards
Examples of behavior that contribute to a positive environment for our
community include:
* Demonstrating empathy and kindness towards other people
* Being respectful and open to differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Taking accountability and offering apologies to those who have been impacted by our errors,
## Demonstrating empathy and kindness towards other people
1. Being respectful and open to differing opinions, viewpoints, and experiences
2. Giving and gracefully accepting constructive feedback
3. Taking accountability and offering apologies to those who have been impacted by our errors,
while also gaining insights from the situation
* Focusing on what is best not just for us as individuals, but for the
4. Focusing on what is best not just for us as individuals but for the
community as a whole
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
1. The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
2. Trolling, insulting or derogatory comments, and personal or political attacks
3. Public or private harassment
4. Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
5. Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
response to any behavior that they deem inappropriate, threatening, offensive
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
not aligned to this Code of Conduct and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
This Code of Conduct applies within all community spaces and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
posting via an official social media account or acting as an appointed
representative at an online or offline event.
## Enforcement
@ -63,29 +63,27 @@ reported to the community leaders responsible for enforcement at
contact@arc53.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
All community leaders are obligated to be respectful towards the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
the consequences for any action that they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
* **Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community space.
**Consequence**: A private, written warning from community leaders, providing
* **Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
* **Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
* **Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
@ -93,23 +91,21 @@ like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
* **Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
* **Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
* **Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual or aggression towards or disparagement of classes of individuals.
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression towards or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
* **Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution

@ -2,8 +2,6 @@
Thank you for choosing to contribute to DocsGPT! We are all very grateful!
### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
# We accept different types of contributions
📣 **Discussions** - Engage in conversations, start new topics, or help answer questions.
@ -17,21 +15,33 @@ Thank you for choosing to contribute to DocsGPT! We are all very grateful!
## 🐞 Issues and Pull requests
We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
- We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
- If you're interested in contributing code, here are some important things to know:
- We have a frontend built on React (Vite) and a backend in Python.
Before creating issues, please check out how the latest version of our app looks and works by launching it via [Quickstart](https://github.com/arc53/DocsGPT#quickstart); note that the version on our live demo is slightly modified with login. Your issues should relate to the version that you can launch via [Quickstart](https://github.com/arc53/DocsGPT#quickstart).
### 👨‍💻 If you're interested in contributing code, here are some important things to know:
If you're interested in contributing code, here are some important things to know:
Tech Stack Overview:
We have a frontend built with React (Vite) and a backend in Python.
- 🌐 Frontend: Built with React (Vite) ⚛️,
### If you are looking to contribute to frontend (⚛React, Vite):
- 🖥 Backend: Developed in Python 🐍
### 🌐 If you are looking to contribute to frontend (⚛React, Vite):
- The current frontend is being migrated from [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) to [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) with a new design, so please contribute to the new one.
- Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
- The Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
- The updated Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
Please try to follow the guidelines.
### If you are looking to contribute to Backend (🐍 Python):
### 🖥 If you are looking to contribute to Backend (🐍 Python):
- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts) (please disregard old [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py) files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
@ -44,9 +54,75 @@ To run unit tests from the root of the repository, execute:
python -m pytest
```
### Workflow:
Fork the repository, make your changes on your forked version, and then submit those changes as a pull request.
## Workflow 📈
Here's a step-by-step guide on how to contribute to DocsGPT:
1. **Fork the Repository:**
- Click the "Fork" button at the top-right of this repository to create your fork.
2. **Clone the Forked Repository:**
- Clone the repository using:
``` shell
git clone https://github.com/<your-github-username>/DocsGPT.git
```
3. **Keep your Fork in Sync:**
- Before you make any changes, make sure that your fork is in sync to avoid merge conflicts using:
```shell
git remote add upstream https://github.com/arc53/DocsGPT.git
git pull upstream main
```
4. **Create and Switch to a New Branch:**
- Create a new branch for your contribution using:
```shell
git checkout -b your-branch-name
```
5. **Make Changes:**
- Make the required changes in your branch.
6. **Add Changes to the Staging Area:**
- Add your changes to the staging area using:
```shell
git add .
```
7. **Commit Your Changes:**
- Commit your changes with a descriptive commit message using:
```shell
git commit -m "Your descriptive commit message"
```
8. **Push Your Changes to the Remote Repository:**
- Push your branch with changes to your fork on GitHub using:
```shell
git push origin your-branch-name
```
9. **Submit a Pull Request (PR):**
- Create a Pull Request from your branch to the main repository. Make sure to include a detailed description of your changes and reference any related issues.
10. **Collaborate:**
- Be responsive to comments and feedback on your PR.
- Make necessary updates as suggested.
- Once your PR is approved, it will be merged into the main repository.
11. **Testing:**
- Before submitting a Pull Request, ensure your code passes all unit tests.
- To run unit tests from the root of the repository, execute:
```shell
python -m pytest
```
*Note: You only need to run the unit tests after making changes to the backend code.*
12. **Questions and Collaboration:**
- Feel free to join our Discord. We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
Thank you for considering contributing to DocsGPT! 🙏
## Questions/collaboration
Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU). We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
# Thank you so much for considering contributing to DocsGPT!🙏
# Thank you so much for considering to contribute DocsGPT!🙏

@ -1,35 +0,0 @@
# 🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉
Welcome, contributors! We're excited to announce that DocsGPT is participating in Hacktoberfest. Get involved by submitting a **meaningful** pull request, and earn a free shirt in return!
All contributors with accepted PRs will receive a cool Holopin! 🤩 (Watch out for a reply in your PR to collect it).
📜 Here's How to Contribute:
🛠️ Code: This is the golden ticket! Make meaningful contributions through PRs.
📚 Wiki: Improve our documentation, create a guide, or change existing documentation.
🖥️ Design: Improve the UI/UX or design a new feature.
📝 Guidelines for Pull Requests:
Familiarize yourself with the current contributions and our [Roadmap](https://github.com/orgs/arc53/projects/2).
Deciding to contribute with code? Here are some insights based on the area of your interest:
- Frontend (⚛React, Vite):
- Most of the code is located in [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) folder. You can also check out our React extension in [`/extensions/react-widget`](https://github.com/arc53/DocsGPT/tree/main/extensions/react-widget).
- For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
- Ensure you adhere to the established guidelines.
- Backend (🐍Python):
- Focus on [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts). However, avoid the files [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) and [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py), as they will soon be deprecated.
- Newly added code should come with relevant unit tests (pytest).
- Refer to the [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder for test suites.
Check out our [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
Once you have created your PR and our maintainers have merged it, please fill in this [form](https://airtable.com/appfkqFVjB0RpYCJh/shrXXM98xgRsbjO7s).
Feel free to join our Discord server. We're here to help newcomers, so don't hesitate to jump in! [Join us here](https://discord.gg/n5BX8dh8rU).
Thank you very much for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your contributions could earn you a stylish new t-shirt as a token of our appreciation. 🎁 Join us, and let's code together! 🚀

@ -7,173 +7,193 @@
</p>
<p align="left">
<strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
<strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> is a cutting-edge open-source solution that streamlines the process of finding information in the project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
Say goodbye to time-consuming manual searches, and let <strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
</p>
<div align="center">
<a href="https://github.com/arc53/DocsGPT">![example1](https://img.shields.io/github/stars/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT">![example2](https://img.shields.io/github/forks/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT/blob/main/LICENSE">![example3](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://discord.gg/n5BX8dh8rU">![example3](https://img.shields.io/discord/1070046503302877216)</a>
<a href="https://github.com/arc53/DocsGPT">![link to main GitHub showing Stars number](https://img.shields.io/github/stars/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT">![link to main GitHub showing Forks number](https://img.shields.io/github/forks/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT/blob/main/LICENSE">![link to license file](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://discord.gg/n5BX8dh8rU">![link to discord](https://img.shields.io/discord/1070046503302877216)</a>
<a href="https://twitter.com/docsgptai">![X (formerly Twitter) URL](https://img.shields.io/twitter/follow/docsgptai)</a>
</div>
### Production Support / Help for companies:
### Production Support / Help for Companies:
We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.
- [Book Demo 👋](https://cal.com/arc53/docsgpt-demo-b2b)
- [Send Email ✉️](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)
### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
![video-example-of-docs-gpt](https://d3dg1063dc54p9.cloudfront.net/videos/demov3.gif)
- [Book Demo :wave:](https://airtable.com/appdeaL0F1qV8Bl2C/shrrJF1Ll7btCJRbP)
- [Send Email :email:](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)
![video-example-of-docs-gpt](https://d3dg1063dc54p9.cloudfront.net/videos/demov3.gif)
## Roadmap
You can find our roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
## Our Open-Source models optimized for DocsGPT:
| Name | Base Model | Requirements (or similar) |
|-------------------|------------|----------------------------------------------------------|
| [Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon) | Falcon-7b | 1xA10G gpu |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b) | llama-2-14b | 2xA10 gpu's |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon) | falcon-40b | 8xA10G gpu's |
## Our Open-Source Models Optimized for DocsGPT:
| Name | Base Model | Requirements (or similar) |
| --------------------------------------------------------------------- | ----------- | ------------------------- |
| [Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon)    | Falcon-7b   | 1xA10G GPU                |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b)                | llama-2-14b | 2xA10 GPUs                |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon)  | falcon-40b  | 8xA10G GPUs               |
If you don't have enough resources to run it, you can use bitsandbytes to quantize.
## Features
![Group 9](https://user-images.githubusercontent.com/17906039/220427472-2644cff4-7666-46a5-819f-fc4a521f63c7.png)
![Main features of DocsGPT showcasing six main features](https://user-images.githubusercontent.com/17906039/220427472-2644cff4-7666-46a5-819f-fc4a521f63c7.png)
## Useful links
## Useful Links
- 🔍🔥 [Live preview](https://docsgpt.arc53.com/)
- 💬🎉[Join our Discord](https://discord.gg/n5BX8dh8rU)
- 📚😎 [Guides](https://docs.docsgpt.co.uk/)
- :mag: :fire: [Live preview](https://docsgpt.arc53.com/)
- 👩‍💻👨‍💻 [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- :speech_balloon: :tada: [Join our Discord](https://discord.gg/n5BX8dh8rU)
- 🗂️🚀 [How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)
- :books: :sunglasses: [Guides](https://docs.docsgpt.co.uk/)
- 🏠🔐 [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)
- :couple: [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- :file_folder: :rocket: [How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)
- :house: :closed_lock_with_key: [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)
## Project Structure
## Project structure
- Application - Flask app (main application).
- Extensions - Chrome extension.
- Scripts - Script that creates similarity search index and stores for other libraries.
- Scripts - Script that creates similarity search index for other libraries.
- Frontend - Frontend uses Vite and React.
- Frontend - Frontend uses <a href="https://vitejs.dev/">Vite</a> and <a href="https://react.dev/">React</a>.
## QuickStart
Note: Make sure you have Docker installed
> [!Note]
> Make sure you have [Docker](https://docs.docker.com/engine/install/) installed
On Mac OS or Linux, write:
`./setup.sh`
It will install all the dependencies and allow you to download the local model or use OpenAI.
It will install all the dependencies and allow you to download the local model, use OpenAI or use our LLM API.
Otherwise, refer to this Guide:
1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys) and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not.
2. Create a `.env` file in your root directory and set the env variables, including `VITE_API_STREAMING` (true or false, depending on whether you want streaming answers).
It should look like this inside:
```
API_KEY=Yourkey
LLM_NAME=[docsgpt or openai or others]
VITE_API_STREAMING=true
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the [/.env-template](https://github.com/arc53/DocsGPT/blob/main/.env-template) and [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) files.
3. Run [./run-with-docker-compose.sh](https://github.com/arc53/DocsGPT/blob/main/run-with-docker-compose.sh).
4. Navigate to http://localhost:5173/.
To stop, just run `Ctrl + C`.
## Development environments
## Development Environments
### Spin up Mongo and Redis
### Spin up mongo and redis
For development, only two containers are used from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) (by deleting all services except for Redis and Mongo).
See file [docker-compose-dev.yaml](./docker-compose-dev.yaml).
Run
```
docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
```
### Run the backend
### Run the Backend
Make sure you have Python 3.10 or 3.11 installed.
> [!Note]
> Make sure you have Python 3.10 or 3.11 installed.
1. Export required environment variables or prepare a `.env` file in the `/application` folder:
- Copy [.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields.
- Copy [.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) and create `.env`.
(check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.)
2. (optional) Create a Python virtual environment:
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments .
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
a) On Mac OS and Linux
```commandline
python -m venv venv
. venv/bin/activate
```
b) On Windows
```commandline
python -m venv venv
venv/Scripts/activate
```
3. Change to the `application/` subdir by the command `cd application/` and install dependencies for the backend:
3. Download embedding model and save it in the `model/` folder:
You can use the script below, or download it manually from [here](https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip), unzip it and save it in the `model/` folder.
```commandline
wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip
unzip mpnet-base-v2.zip -d model
rm mpnet-base-v2.zip
```
4. Change to the `application/` subdir by the command `cd application/` and install dependencies for the backend:
```commandline
pip install -r requirements.txt
```
4. Run the app using `flask run --host=0.0.0.0 --port=7091`.
5. Start worker with `celery -A application.app.celery worker -l INFO`.
### Start frontend
5. Run the app using `flask --app application/app.py run --host=0.0.0.0 --port=7091`.
6. Start worker with `celery -A application.app.celery worker -l INFO`.
Make sure you have Node version 16 or higher.
### Start Frontend
> [!Note]
> Make sure you have Node version 16 or higher.
1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder.
2. Install required packages `husky` and `vite` (ignore if installed).
2. Install the required packages `husky` and `vite` (ignore if already installed).
```commandline
npm install husky -g
npm install vite -g
```
3. Install dependencies by running `npm install --include=dev`.
4. Run the app using `npm run dev`.
## Contributing
Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information about how to get involved. We welcome issues, questions, and pull requests.
## Code Of Conduct
We as members, contributors, and leaders, pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more information about contributing.
## Many Thanks To Our Contributors
<a href="[https://github.com/arc53/DocsGPT/graphs/contributors](https://docsgpt.arc53.com/)">
<img src="https://contrib.rocks/image?repo=arc53/DocsGPT" />
<a href="https://github.com/arc53/DocsGPT/graphs/contributors" alt="View Contributors">
<img src="https://contrib.rocks/image?repo=arc53/DocsGPT" alt="Contributors" />
</a>
## License
The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file.
Built with [🦜️🔗 LangChain](https://github.com/hwchase17/langchain)
Built with [:bird: :link: LangChain](https://github.com/hwchase17/langchain)

@ -1,19 +1,25 @@
FROM python:3.10-slim-bullseye as builder
FROM python:3.11-slim-bullseye as builder
# Tiktoken requires Rust toolchain, so build it in a separate stage
RUN apt-get update && apt-get install -y gcc curl
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y && apt-get install --reinstall libc6-dev -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN pip install --upgrade pip && pip install tiktoken==0.3.3
RUN pip install --upgrade pip && pip install tiktoken==0.5.2
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get install -y wget unzip
RUN wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip
RUN unzip mpnet-base-v2.zip -d model
RUN rm mpnet-base-v2.zip
FROM python:3.10-slim-bullseye
FROM python:3.11-slim-bullseye
# Copy pre-built packages and binaries from builder stage
COPY --from=builder /usr/local/ /usr/local/
WORKDIR /app
COPY --from=builder /model /app/model
COPY . /app/application
ENV FLASK_APP=app.py
ENV FLASK_DEBUG=true

@ -25,30 +25,30 @@ mongo = MongoClient(settings.MONGO_URI)
db = mongo["docsgpt"]
conversations_collection = db["conversations"]
vectors_collection = db["vectors"]
prompts_collection = db["prompts"]
answer = Blueprint('answer', __name__)
if settings.LLM_NAME == "gpt4":
gpt_model = 'gpt-4'
elif settings.LLM_NAME == "anthropic":
gpt_model = 'claude-2'
else:
gpt_model = 'gpt-3.5-turbo'
# load the prompts
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
with open(os.path.join(current_dir, "prompts", "combine_prompt.txt"), "r") as f:
template = f.read()
with open(os.path.join(current_dir, "prompts", "combine_prompt_hist.txt"), "r") as f:
template_hist = f.read()
with open(os.path.join(current_dir, "prompts", "question_prompt.txt"), "r") as f:
template_quest = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_prompt.txt"), "r") as f:
with open(os.path.join(current_dir, "prompts", "chat_combine_default.txt"), "r") as f:
chat_combine_template = f.read()
with open(os.path.join(current_dir, "prompts", "chat_reduce_prompt.txt"), "r") as f:
chat_reduce_template = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_creative.txt"), "r") as f:
chat_combine_creative = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_strict.txt"), "r") as f:
chat_combine_strict = f.read()
api_key_set = settings.API_KEY is not None
embeddings_key_set = settings.EMBEDDINGS_KEY is not None
@ -77,11 +77,10 @@ def run_async_chain(chain, question, chat_history):
def get_vectorstore(data):
if "active_docs" in data:
if data["active_docs"].split("/")[0] == "local":
if data["active_docs"].split("/")[1] == "default":
if data["active_docs"].split("/")[0] == "default":
vectorstore = ""
else:
vectorstore = "indexes/" + data["active_docs"]
elif data["active_docs"].split("/")[0] == "local":
vectorstore = "indexes/" + data["active_docs"]
else:
vectorstore = "vectors/" + data["active_docs"]
if data["active_docs"] == "default":
@ -92,47 +91,35 @@ def get_vectorstore(data):
return vectorstore
# def get_docsearch(vectorstore, embeddings_key):
# if settings.EMBEDDINGS_NAME == "openai_text-embedding-ada-002":
# if is_azure_configured():
# os.environ["OPENAI_API_TYPE"] = "azure"
# openai_embeddings = OpenAIEmbeddings(model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME)
# else:
# openai_embeddings = OpenAIEmbeddings(openai_api_key=embeddings_key)
# docsearch = FAISS.load_local(vectorstore, openai_embeddings)
# elif settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
# docsearch = FAISS.load_local(vectorstore, HuggingFaceHubEmbeddings())
# elif settings.EMBEDDINGS_NAME == "huggingface_hkunlp/instructor-large":
# docsearch = FAISS.load_local(vectorstore, HuggingFaceInstructEmbeddings())
# elif settings.EMBEDDINGS_NAME == "cohere_medium":
# docsearch = FAISS.load_local(vectorstore, CohereEmbeddings(cohere_api_key=embeddings_key))
# return docsearch
def is_azure_configured():
return settings.OPENAI_API_BASE and settings.OPENAI_API_VERSION and settings.AZURE_DEPLOYMENT_NAME
def complete_stream(question, docsearch, chat_history, api_key, conversation_id):
def complete_stream(question, docsearch, chat_history, api_key, prompt_id, conversation_id):
llm = LLMCreator.create_llm(settings.LLM_NAME, api_key=api_key)
if prompt_id == 'default':
prompt = chat_combine_template
elif prompt_id == 'creative':
prompt = chat_combine_creative
elif prompt_id == 'strict':
prompt = chat_combine_strict
else:
prompt = prompts_collection.find_one({"_id": ObjectId(prompt_id)})["content"]
docs = docsearch.search(question, k=2)
if settings.LLM_NAME == "llama.cpp":
docs = [docs[0]]
# join all page_content together with a newline
docs_together = "\n".join([doc.page_content for doc in docs])
p_chat_combine = chat_combine_template.replace("{summaries}", docs_together)
p_chat_combine = prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
source_log_docs = []
for doc in docs:
if doc.metadata:
data = json.dumps({"type": "source", "doc": doc.page_content, "metadata": doc.metadata})
source_log_docs.append({"title": doc.metadata['title'].split('/')[-1], "text": doc.page_content})
else:
data = json.dumps({"type": "source", "doc": doc.page_content})
source_log_docs.append({"title": doc.page_content, "text": doc.page_content})
yield f"data:{data}\n\n"
if len(chat_history) > 1:
tokens_current_history = 0
@ -199,6 +186,10 @@ def stream():
# history to json object from string
history = json.loads(history)
conversation_id = data["conversation_id"]
if 'prompt_id' in data:
prompt_id = data["prompt_id"]
else:
prompt_id = 'default'
# check if active_docs is set
@ -219,6 +210,7 @@ def stream():
return Response(
complete_stream(question, docsearch,
chat_history=history, api_key=api_key,
prompt_id=prompt_id,
conversation_id=conversation_id), mimetype="text/event-stream"
)
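A minimal client sketch of how the new `prompt_id` flows through `/stream`. The port, the payload fields not shown in this hunk (such as `question` and `active_docs`), and the ingested `default` index are assumptions; the `prompt_id` values mirror the handler above.

```python
import json
import requests

# Hypothetical local call; assumes the backend from this PR runs on its default
# port and that a "default" index has been ingested.
payload = {
    "question": "How do I install DocsGPT?",
    "history": json.dumps([]),   # the handler json.loads() this field
    "conversation_id": None,
    "prompt_id": "creative",     # "default", "creative", "strict", or a stored prompt's ObjectId string
    "active_docs": "default",
}

with requests.post("http://localhost:7091/stream", json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if line and line.startswith(b"data:"):
            event = json.loads(line[len(b"data:"):])
            print(event)  # {"type": "source", ...} events, then answer tokens
```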
@ -241,6 +233,19 @@ def api_answer():
embeddings_key = data["embeddings_key"]
else:
embeddings_key = settings.EMBEDDINGS_KEY
if 'prompt_id' in data:
prompt_id = data["prompt_id"]
else:
prompt_id = 'default'
if prompt_id == 'default':
prompt = chat_combine_template
elif prompt_id == 'creative':
prompt = chat_combine_creative
elif prompt_id == 'strict':
prompt = chat_combine_strict
else:
prompt = prompts_collection.find_one({"_id": ObjectId(prompt_id)})["content"]
# use try and except to check for exception
try:
@ -258,7 +263,7 @@ def api_answer():
docs = docsearch.search(question, k=2)
# join all page_content together with a newline
docs_together = "\n".join([doc.page_content for doc in docs])
p_chat_combine = chat_combine_template.replace("{summaries}", docs_together)
p_chat_combine = prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
source_log_docs = []
for doc in docs:
@ -335,3 +340,35 @@ def api_answer():
traceback.print_exc()
print(str(e))
return bad_request(500, str(e))
@answer.route("/api/search", methods=["POST"])
def api_search():
data = request.get_json()
# get parameter from url question
question = data["question"]
if not embeddings_key_set:
if "embeddings_key" in data:
embeddings_key = data["embeddings_key"]
else:
embeddings_key = settings.EMBEDDINGS_KEY
else:
embeddings_key = settings.EMBEDDINGS_KEY
if "active_docs" in data:
vectorstore = get_vectorstore({"active_docs": data["active_docs"]})
else:
vectorstore = ""
docsearch = VectorCreator.create_vectorstore(settings.VECTOR_STORE, vectorstore, embeddings_key)
docs = docsearch.search(question, k=2)
source_log_docs = []
for doc in docs:
if doc.metadata:
source_log_docs.append({"title": doc.metadata['title'].split('/')[-1], "text": doc.page_content})
else:
source_log_docs.append({"title": doc.page_content, "text": doc.page_content})
#yield f"data:{data}\n\n"
return source_log_docs
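A rough sketch of calling the new `/api/search` route; the base URL and the existence of a `default` index are assumptions, the field names come from the handler above.

```python
import requests

# Hypothetical query; assumes the backend runs locally and a "default" index exists.
resp = requests.post(
    "http://localhost:7091/api/search",
    json={"question": "How do I configure Elasticsearch?", "active_docs": "default"},
)
for source in resp.json():
    print(source["title"], "->", source["text"][:80])
```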

@ -1,11 +1,9 @@
import os
from flask import Blueprint, request, jsonify
import requests
import json
from pymongo import MongoClient
from bson.objectid import ObjectId
from werkzeug.utils import secure_filename
import http.client
from application.api.user.tasks import ingest
@ -16,6 +14,8 @@ mongo = MongoClient(settings.MONGO_URI)
db = mongo["docsgpt"]
conversations_collection = db["conversations"]
vectors_collection = db["vectors"]
prompts_collection = db["prompts"]
feedback_collection = db["feedback"]
user = Blueprint('user', __name__)
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@ -70,20 +70,29 @@ def api_feedback():
answer = data["answer"]
feedback = data["feedback"]
print("-" * 5)
print("Question: " + question)
print("Answer: " + answer)
print("Feedback: " + feedback)
print("-" * 5)
response = requests.post(
url="https://86x89umx77.execute-api.eu-west-2.amazonaws.com/docsgpt-feedback",
headers={
"Content-Type": "application/json; charset=utf-8",
},
data=json.dumps({"answer": answer, "question": question, "feedback": feedback}),
feedback_collection.insert_one(
{
"question": question,
"answer": answer,
"feedback": feedback,
}
)
return {"status": http.client.responses.get(response.status_code, "ok")}
return {"status": "ok"}
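With feedback now stored straight in MongoDB, submitting it is a plain JSON POST; a hypothetical call, where the field names mirror the handler above and the values are placeholders.

```python
import requests

# Hypothetical submission; the three field names mirror the handler above,
# the values are placeholders.
requests.post(
    "http://localhost:7091/api/feedback",
    json={"question": "What is DocsGPT?", "answer": "An AI documentation assistant.", "feedback": "LIKE"},
)
```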
@user.route("/api/delete_by_ids", methods=["get"])
def delete_by_ids():
"""Delete by ID. These are the IDs in the vectorstore"""
ids = request.args.get("path")
if not ids:
return {"status": "error"}
if settings.VECTOR_STORE == "faiss":
result = vectors_collection.delete_index(ids=ids)
if result:
return {"status": "ok"}
return {"status": "error"}
@user.route("/api/delete_old", methods=["get"])
def delete_old():
@ -140,7 +149,9 @@ def upload_file():
os.makedirs(save_dir)
file.save(os.path.join(save_dir, filename))
task = ingest.delay(settings.UPLOAD_FOLDER, [".rst", ".md", ".pdf", ".txt"], job_name, filename, user)
task = ingest.delay(settings.UPLOAD_FOLDER, [".rst", ".md", ".pdf", ".txt", ".docx",
".csv", ".epub", ".html", ".mdx"],
job_name, filename, user)
# task id
task_id = task.id
return {"status": "ok", "task_id": task_id}
@ -173,7 +184,7 @@ def combined_json():
"date": "default",
"docLink": "default",
"model": settings.EMBEDDINGS_NAME,
"location": "local",
"location": "remote",
}
]
# structure: name, language, version, description, fullName, date, docLink
@ -230,6 +241,80 @@ def check_docs():
return {"status": "loaded"}
@user.route("/api/create_prompt", methods=["POST"])
def create_prompt():
data = request.get_json()
content = data["content"]
name = data["name"]
if name == "":
return {"status": "error"}
user = "local"
resp = prompts_collection.insert_one(
{
"name": name,
"content": content,
"user": user,
}
)
new_id = str(resp.inserted_id)
return {"id": new_id}
@user.route("/api/get_prompts", methods=["GET"])
def get_prompts():
user = "local"
prompts = prompts_collection.find({"user": user})
list_prompts = []
list_prompts.append({"id": "default", "name": "default", "type": "public"})
list_prompts.append({"id": "creative", "name": "creative", "type": "public"})
list_prompts.append({"id": "strict", "name": "strict", "type": "public"})
for prompt in prompts:
list_prompts.append({"id": str(prompt["_id"]), "name": prompt["name"], "type": "private"})
return jsonify(list_prompts)
@user.route("/api/get_single_prompt", methods=["GET"])
def get_single_prompt():
prompt_id = request.args.get("id")
if prompt_id == 'default':
with open(os.path.join(current_dir, "prompts", "chat_combine_default.txt"), "r") as f:
chat_combine_template = f.read()
return jsonify({"content": chat_combine_template})
elif prompt_id == 'creative':
with open(os.path.join(current_dir, "prompts", "chat_combine_creative.txt"), "r") as f:
chat_reduce_creative = f.read()
return jsonify({"content": chat_reduce_creative})
elif prompt_id == 'strict':
with open(os.path.join(current_dir, "prompts", "chat_combine_strict.txt"), "r") as f:
chat_reduce_strict = f.read()
return jsonify({"content": chat_reduce_strict})
prompt = prompts_collection.find_one({"_id": ObjectId(prompt_id)})
return jsonify({"content": prompt["content"]})
@user.route("/api/delete_prompt", methods=["POST"])
def delete_prompt():
data = request.get_json()
id = data["id"]
prompts_collection.delete_one(
{
"_id": ObjectId(id),
}
)
return {"status": "ok"}
@user.route("/api/update_prompt", methods=["POST"])
def update_prompt_name():
data = request.get_json()
id = data["id"]
name = data["name"]
content = data["content"]
# check if name is null
if name == "":
return {"status": "error"}
prompts_collection.update_one({"_id": ObjectId(id)},{"$set":{"name":name, "content": content}})
return {"status": "ok"}
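A hedged client-side walkthrough of the new prompt CRUD routes; the base URL and the prompt text are assumptions, while the route and field names come from the handlers above.

```python
import requests

BASE = "http://localhost:7091"  # assumed local backend

# Create a custom prompt, then list and fetch it. The returned id can be passed
# as prompt_id to /stream or /api/answer.
new_id = requests.post(
    f"{BASE}/api/create_prompt",
    json={"name": "terse", "content": "Answer in one sentence.\n----------------\n{summaries}"},
).json()["id"]

print(requests.get(f"{BASE}/api/get_prompts").json())
print(requests.get(f"{BASE}/api/get_single_prompt", params={"id": new_id}).json())

# Rename / edit, then clean up.
requests.post(f"{BASE}/api/update_prompt", json={"id": new_id, "name": "terse-v2", "content": "Be brief.\n{summaries}"})
requests.post(f"{BASE}/api/delete_prompt", json={"id": new_id})
```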

@ -1,13 +1,14 @@
from pathlib import Path
from typing import Optional
import os
from pydantic import BaseSettings
from pydantic_settings import BaseSettings
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
class Settings(BaseSettings):
LLM_NAME: str = "openai"
EMBEDDINGS_NAME: str = "openai_text-embedding-ada-002"
LLM_NAME: str = "docsgpt"
EMBEDDINGS_NAME: str = "huggingface_sentence-transformers/all-mpnet-base-v2"
CELERY_BROKER_URL: str = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND: str = "redis://localhost:6379/1"
MONGO_URI: str = "mongodb://localhost:27017/docsgpt"
@ -18,25 +19,25 @@ class Settings(BaseSettings):
API_URL: str = "http://localhost:7091" # backend url for celery worker
API_KEY: str = None # LLM api key
EMBEDDINGS_KEY: str = None # api key for embeddings (if using openai, just copy API_KEY
OPENAI_API_BASE: str = None # azure openai api base url
OPENAI_API_VERSION: str = None # azure openai api version
AZURE_DEPLOYMENT_NAME: str = None # azure deployment name for answering
AZURE_EMBEDDINGS_DEPLOYMENT_NAME: str = None # azure deployment name for embeddings
API_KEY: Optional[str] = None # LLM api key
EMBEDDINGS_KEY: Optional[str] = None # api key for embeddings (if using openai, just copy API_KEY)
OPENAI_API_BASE: Optional[str] = None # azure openai api base url
OPENAI_API_VERSION: Optional[str] = None # azure openai api version
AZURE_DEPLOYMENT_NAME: Optional[str] = None # azure deployment name for answering
AZURE_EMBEDDINGS_DEPLOYMENT_NAME: Optional[str] = None # azure deployment name for embeddings
# elasticsearch
ELASTIC_CLOUD_ID: str = None # cloud id for elasticsearch
ELASTIC_USERNAME: str = None # username for elasticsearch
ELASTIC_PASSWORD: str = None # password for elasticsearch
ELASTIC_URL: str = None # url for elasticsearch
ELASTIC_INDEX: str = "docsgpt" # index name for elasticsearch
ELASTIC_CLOUD_ID: Optional[str] = None # cloud id for elasticsearch
ELASTIC_USERNAME: Optional[str] = None # username for elasticsearch
ELASTIC_PASSWORD: Optional[str] = None # password for elasticsearch
ELASTIC_URL: Optional[str] = None # url for elasticsearch
ELASTIC_INDEX: Optional[str] = "docsgpt" # index name for elasticsearch
# SageMaker config
SAGEMAKER_ENDPOINT: str = None # SageMaker endpoint name
SAGEMAKER_REGION: str = None # SageMaker region name
SAGEMAKER_ACCESS_KEY: str = None # SageMaker access key
SAGEMAKER_SECRET_KEY: str = None # SageMaker secret key
SAGEMAKER_ENDPOINT: Optional[str] = None # SageMaker endpoint name
SAGEMAKER_REGION: Optional[str] = None # SageMaker region name
SAGEMAKER_ACCESS_KEY: Optional[str] = None # SageMaker access key
SAGEMAKER_SECRET_KEY: Optional[str] = None # SageMaker secret key
path = Path(__file__).parent.parent.absolute()
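Because the fields are now `Optional`, `pydantic_settings` can build the object even when most keys are unset; a small sketch of supplying values through environment variables, assuming the package is importable and using placeholder values.

```python
import os

# Hypothetical standalone check; pydantic_settings reads matching environment
# variables (case-insensitive by default) when the Settings object is built.
os.environ["LLM_NAME"] = "openai"
os.environ["API_KEY"] = "sk-placeholder"   # placeholder value

from application.core.settings import Settings

settings = Settings()
print(settings.LLM_NAME)            # "openai"
print(settings.SAGEMAKER_ENDPOINT)  # None -- Optional fields simply stay unset
```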

Binary file not shown.

Binary file not shown.

@ -0,0 +1,40 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
class AnthropicLLM(BaseLLM):
def __init__(self, api_key=None):
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
self.api_key = api_key or settings.ANTHROPIC_API_KEY # If not provided, use a default from settings
self.anthropic = Anthropic(api_key=self.api_key)
self.HUMAN_PROMPT = HUMAN_PROMPT
self.AI_PROMPT = AI_PROMPT
def gen(self, model, messages, engine=None, max_tokens=300, stream=False, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Context \n {context} \n ### Question \n {user_question}"
if stream:
return self.gen_stream(model, prompt, max_tokens, **kwargs)
completion = self.anthropic.completions.create(
model=model,
max_tokens_to_sample=max_tokens,
stream=stream,
prompt=f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT}",
)
return completion.completion
def gen_stream(self, model, messages, engine=None, max_tokens=300, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Context \n {context} \n ### Question \n {user_question}"
stream_response = self.anthropic.completions.create(
model=model,
prompt=f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT}",
max_tokens_to_sample=max_tokens,
stream=True,
)
for completion in stream_response:
yield completion.completion
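A hypothetical direct use of the new provider, assuming a valid Anthropic key; the message layout mirrors what `gen()` expects (retrieved context first, user question last).

```python
from application.llm.anthropic import AnthropicLLM

llm = AnthropicLLM(api_key="sk-ant-placeholder")  # or rely on settings.ANTHROPIC_API_KEY
messages = [
    {"role": "system", "content": "DocsGPT is an open-source documentation assistant."},  # retrieved context
    {"role": "user", "content": "What does DocsGPT do?"},                                  # user question
]
print(llm.gen(model="claude-2", messages=messages))

for token in llm.gen_stream(model="claude-2", messages=messages):
    print(token, end="")
```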

@ -0,0 +1,49 @@
from application.llm.base import BaseLLM
import json
import requests
class DocsGPTAPILLM(BaseLLM):
def __init__(self, *args, **kwargs):
self.endpoint = "https://llm.docsgpt.co.uk"
def gen(self, model, engine, messages, stream=False, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
response = requests.post(
f"{self.endpoint}/answer",
json={
"prompt": prompt,
"max_new_tokens": 30
}
)
response_clean = response.json()['a'].split("###")[0]
return response_clean
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
# send prompt to endpoint /stream
response = requests.post(
f"{self.endpoint}/stream",
json={
"prompt": prompt,
"max_new_tokens": 256
},
stream=True
)
for line in response.iter_lines():
if line:
#data = json.loads(line)
data_str = line.decode('utf-8')
if data_str.startswith("data: "):
data = json.loads(data_str[6:])
yield data['a']
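A hypothetical direct use of the hosted DocsGPT LLM provider; no key is needed since the endpoint is hard-coded, and `model`/`engine` are ignored by this class.

```python
from application.llm.docsgpt_provider import DocsGPTAPILLM

llm = DocsGPTAPILLM()
messages = [
    {"role": "system", "content": "Context: DocsGPT answers questions about project documentation."},
    {"role": "user", "content": "Summarise what DocsGPT is."},
]
print(llm.gen(model=None, engine=None, messages=messages))          # single completion

for token in llm.gen_stream(model=None, engine=None, messages=messages):
    print(token, end="")                                            # streamed tokens
```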

@ -2,6 +2,8 @@ from application.llm.openai import OpenAILLM, AzureOpenAILLM
from application.llm.sagemaker import SagemakerAPILLM
from application.llm.huggingface import HuggingFaceLLM
from application.llm.llama_cpp import LlamaCpp
from application.llm.anthropic import AnthropicLLM
from application.llm.docsgpt_provider import DocsGPTAPILLM
@ -11,7 +13,9 @@ class LLMCreator:
'azure_openai': AzureOpenAILLM,
'sagemaker': SagemakerAPILLM,
'huggingface': HuggingFaceLLM,
'llama.cpp': LlamaCpp
'llama.cpp': LlamaCpp,
'anthropic': AnthropicLLM,
'docsgpt': DocsGPTAPILLM
}
@classmethod
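A sketch of selecting the new providers through the registry, mirroring the `LLMCreator.create_llm(settings.LLM_NAME, api_key=api_key)` call in `answer.py`; the module path and the key values are assumptions.

```python
from application.llm.llm_creator import LLMCreator  # module path assumed

# Keys are placeholders; DocsGPTAPILLM ignores them, AnthropicLLM uses them.
hosted = LLMCreator.create_llm("docsgpt", api_key=None)
claude = LLMCreator.create_llm("anthropic", api_key="sk-ant-placeholder")
```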

@ -5,40 +5,38 @@ class OpenAILLM(BaseLLM):
def __init__(self, api_key):
global openai
import openai
openai.api_key = api_key
self.api_key = api_key # Save the API key to be used later
from openai import OpenAI
self.client = OpenAI(
api_key=api_key,
)
self.api_key = api_key
def _get_openai(self):
# Import openai when needed
import openai
# Set the API key every time you import openai
openai.api_key = self.api_key
return openai
def gen(self, model, engine, messages, stream=False, **kwargs):
response = openai.ChatCompletion.create(
model=model,
engine=engine,
response = self.client.chat.completions.create(model=model,
messages=messages,
stream=stream,
**kwargs
)
**kwargs)
return response["choices"][0]["message"]["content"]
return response.choices[0].message.content
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
response = openai.ChatCompletion.create(
model=model,
engine=engine,
response = self.client.chat.completions.create(model=model,
messages=messages,
stream=stream,
**kwargs
)
**kwargs)
for line in response:
if "content" in line["choices"][0]["delta"]:
yield line["choices"][0]["delta"]["content"]
# import sys
# print(line.choices[0].delta.content, file=sys.stderr)
if line.choices[0].delta.content is not None:
yield line.choices[0].delta.content
class AzureOpenAILLM(OpenAILLM):
@ -48,10 +46,15 @@ class AzureOpenAILLM(OpenAILLM):
self.api_base = settings.OPENAI_API_BASE,
self.api_version = settings.OPENAI_API_VERSION,
self.deployment_name = settings.AZURE_DEPLOYMENT_NAME,
from openai import AzureOpenAI
self.client = AzureOpenAI(
api_key=openai_api_key,
api_version=settings.OPENAI_API_VERSION,
api_base=settings.OPENAI_API_BASE,
deployment_name=settings.AZURE_DEPLOYMENT_NAME,
)
def _get_openai(self):
openai = super()._get_openai()
openai.api_base = self.api_base
openai.api_version = self.api_version
openai.api_type = "azure"
return openai

@ -62,7 +62,6 @@ class SimpleDirectoryReader(BaseReader):
file_extractor: Optional[Dict[str, BaseParser]] = None,
num_files_limit: Optional[int] = None,
file_metadata: Optional[Callable[[str], Dict]] = None,
chunk_size_max: int = 2048,
) -> None:
"""Initialize with parameters."""
super().__init__()

@ -0,0 +1,51 @@
from urllib.parse import urlparse
from openapi_parser import parse
try:
from application.parser.file.base_parser import BaseParser
except ModuleNotFoundError:
from base_parser import BaseParser
class OpenAPI3Parser(BaseParser):
def init_parser(self) -> None:
return super().init_parser()
def get_base_urls(self, urls):
base_urls = []
for i in urls:
parsed_url = urlparse(i)
base_url = parsed_url.scheme + "://" + parsed_url.netloc
if base_url not in base_urls:
base_urls.append(base_url)
return base_urls
def get_info_from_paths(self, path):
info = ""
if path.operations:
for operation in path.operations:
info += (
f"\n{operation.method.value}="
f"{operation.responses[0].description}"
)
return info
def parse_file(self, file_path):
data = parse(file_path)
results = ""
base_urls = self.get_base_urls(link.url for link in data.servers)
base_urls = ",".join([base_url for base_url in base_urls])
results += f"Base URL:{base_urls}\n"
i = 1
for path in data.paths:
info = self.get_info_from_paths(path)
results += (
f"Path{i}: {path.url}\n"
f"description: {path.description}\n"
f"parameters: {path.parameters}\nmethods: {info}\n"
)
i += 1
with open("results.txt", "w") as f:
f.write(results)
return results
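A hypothetical run of the new parser, assuming `BaseParser` can be constructed without arguments, the module path shown in the import, and a valid local `openapi.yaml`.

```python
from application.parser.file.openapi3_parser import OpenAPI3Parser  # module path assumed

parser = OpenAPI3Parser()                    # assumes BaseParser needs no arguments
summary = parser.parse_file("openapi.yaml")  # assumes a valid OpenAPI 3 spec at this path
print(summary)  # "Base URL:...\nPath1: ..." (also written to results.txt)
```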

@ -6,9 +6,9 @@ from application.core.settings import settings
from retry import retry
# from langchain.embeddings import HuggingFaceEmbeddings
# from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain.embeddings import CohereEmbeddings
# from langchain_community.embeddings import HuggingFaceEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import CohereEmbeddings
def num_tokens_from_string(string: str, encoding_name: str) -> int:

@ -0,0 +1,9 @@
You are a helpful AI assistant, DocsGPT, specializing in document assistance, designed to offer detailed and informative responses.
If appropriate, your answers can include code examples, formatted as follows:
```(language)
(code)
```
You effectively utilize chat history, ensuring relevant and tailored responses.
If a question doesn't align with your context, you provide friendly and helpful replies.
----------------
{summaries}

@ -0,0 +1,13 @@
You are an AI Assistant, DocsGPT, adept at offering document assistance.
Your expertise lies in providing answers on top of the provided context.
You can leverage the chat history if needed.
Answer the question based on the context below.
Keep the answer concise. Respond "Irrelevant context" if not sure about the answer.
If the question is not related to the context, respond "Irrelevant context".
When using code examples, use the following format:
```(language)
(code)
```
----------------
Context:
{summaries}

@ -1,25 +0,0 @@
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
QUESTION: How to merge tables in pandas?
=========
Content: pandas provides various facilities for easily combining together Series or DataFrame with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
Source: 28-pl
Content: pandas provides a single function, merge(), as the entry point for all standard database join operations between DataFrame or named Series objects: \n\npandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Source: 30-pl
=========
FINAL ANSWER: To merge two tables in pandas, you can use the pd.merge() function. The basic syntax is: \n\npd.merge(left, right, on, how) \n\nwhere left and right are the two tables to merge, on is the column to merge on, and how is the type of merge to perform. \n\nFor example, to merge the two tables df1 and df2 on the column 'id', you can use: \n\npd.merge(df1, df2, on='id', how='inner')
SOURCES: 28-pl 30-pl
QUESTION: How are you?
=========
CONTENT:
SOURCE:
=========
FINAL ANSWER: I am fine, thank you. How are you?
SOURCES:
QUESTION: {{ question }}
=========
{{ summaries }}
=========
FINAL ANSWER:

@ -1,33 +0,0 @@
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
QUESTION: How to merge tables in pandas?
=========
Content: pandas provides various facilities for easily combining together Series or DataFrame with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
Source: 28-pl
Content: pandas provides a single function, merge(), as the entry point for all standard database join operations between DataFrame or named Series objects: \n\npandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Source: 30-pl
=========
FINAL ANSWER: To merge two tables in pandas, you can use the pd.merge() function. The basic syntax is: \n\npd.merge(left, right, on, how) \n\nwhere left and right are the two tables to merge, on is the column to merge on, and how is the type of merge to perform. \n\nFor example, to merge the two tables df1 and df2 on the column 'id', you can use: \n\npd.merge(df1, df2, on='id', how='inner')
SOURCES: 28-pl 30-pl
QUESTION: How are you?
=========
CONTENT:
SOURCE:
=========
FINAL ANSWER: I am fine, thank you. How are you?
SOURCES:
QUESTION: {{ historyquestion }}
=========
CONTENT:
SOURCE:
=========
FINAL ANSWER: {{ historyanswer }}
SOURCES:
QUESTION: {{ question }}
=========
{{ summaries }}
=========
FINAL ANSWER:

@ -1,4 +0,0 @@
Use the following portion of a long document to see if any of the text is relevant to answer the question.
{{ context }}
Question: {{ question }}
Provide all relevant text to the question verbatim. Summarize if needed. If nothing relevant return "-".

@ -1,106 +1,33 @@
aiodns==3.0.0
aiohttp==3.8.5
aiohttp-retry==2.8.3
aiosignal==1.3.1
aleph-alpha-client==2.16.1
amqp==5.1.1
async-timeout==4.0.2
attrs==22.2.0
billiard==3.6.4.0
blobfile==2.0.1
boto3==1.28.20
celery==5.2.7
cffi==1.15.1
charset-normalizer==3.1.0
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
cryptography==41.0.4
dataclasses-json==0.5.7
decorator==5.1.1
dill==0.3.6
dnspython==2.3.0
ecdsa==0.18.0
elasticsearch==8.9.0
entrypoints==0.4
faiss-cpu==1.7.3
filelock==3.9.0
Flask==2.2.5
Flask-Cors==3.0.10
frozenlist==1.3.3
geojson==2.5.0
gunicorn==20.1.0
greenlet==2.0.2
gpt4all==0.1.7
huggingface-hub==0.15.1
humbug==0.3.2
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
kombu==5.2.4
langchain==0.0.308
loguru==0.6.0
lxml==4.9.2
MarkupSafe==2.1.2
marshmallow==3.19.0
marshmallow-enum==1.5.1
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.14
mypy-extensions==1.0.0
networkx==3.0
anthropic==0.12.0
boto3==1.34.6
celery==5.3.6
dataclasses_json==0.6.3
docx2txt==0.8
EbookLib==0.18
elasticsearch==8.12.0
escodegen==1.0.11
esprima==4.0.1
faiss-cpu==1.7.4
Flask==3.0.1
gunicorn==21.2.0
html2text==2020.1.16
javalang==0.13.0
langchain==0.1.4
langchain-openai==0.0.5
nltk==3.8.1
numcodecs==0.11.0
numpy==1.24.2
openai==0.27.8
packaging==23.0
pathos==0.3.0
Pillow==10.0.1
pox==0.3.2
ppft==1.7.6.6
prompt-toolkit==3.0.38
py==1.11.0
pyasn1==0.4.8
pycares==4.3.0
pycparser==2.21
pycryptodomex==3.17
pycryptodome==3.19.0
pydantic==1.10.5
PyJWT==2.6.0
pymongo==4.3.3
pyowm==3.3.0
openapi3_parser==1.1.16
pandas==2.2.0
pydantic_settings==2.1.0
pymongo==4.6.1
PyPDF2==3.0.1
PySocks==1.7.1
pytest
python-dateutil==2.8.2
python-dotenv==1.0.0
python-jose==3.3.0
pytz==2022.7.1
PyYAML==6.0
redis==4.5.4
regex==2022.10.31
requests==2.31.0
python-dotenv==1.0.1
redis==5.0.1
Requests==2.31.0
retry==0.9.2
rsa==4.9
scikit-learn==1.2.2
scipy==1.10.1
sentencepiece
six==1.16.0
SQLAlchemy==1.4.46
sympy==1.11.1
tenacity==8.2.2
threadpoolctl==3.1.0
tiktoken
tqdm==4.65.0
transformers==4.30.0
typer==0.7.0
typing-inspect==0.8.0
typing_extensions==4.5.0
urllib3==1.26.17
vine==5.0.0
wcwidth==0.2.6
yarl==1.8.2
sentence-transformers
tiktoken==0.5.2
torch==2.1.2
tqdm==4.66.1
transformers==4.36.2
unstructured==0.12.2
Werkzeug==3.0.1

@ -1,11 +1,11 @@
from abc import ABC, abstractmethod
import os
from langchain.embeddings import (
OpenAIEmbeddings,
from langchain_community.embeddings import (
HuggingFaceEmbeddings,
CohereEmbeddings,
HuggingFaceInstructEmbeddings,
)
from langchain_openai import OpenAIEmbeddings
from application.core.settings import settings
class BaseVectorStore(ABC):
@ -44,6 +44,11 @@ class BaseVectorStore(ABC):
embedding_instance = embeddings_factory[embeddings_name](
cohere_api_key=embeddings_key
)
elif embeddings_name == "huggingface_sentence-transformers/all-mpnet-base-v2":
embedding_instance = embeddings_factory[embeddings_name](
#model_name="./model/all-mpnet-base-v2",
model_kwargs={"device": "cpu"},
)
else:
embedding_instance = embeddings_factory[embeddings_name]()

@ -0,0 +1,8 @@
class Document(str):
"""Class for storing a piece of text and associated metadata."""
def __new__(cls, page_content: str, metadata: dict):
instance = super().__new__(cls, page_content)
instance.page_content = page_content
instance.metadata = metadata
return instance
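For orientation, here is a small illustrative snippet (not part of the diff) showing how this wrapper behaves; the text and metadata values are made up:

```python
# Document is a plain str subclass that also carries the page text and metadata
# as attributes, so existing code that expects strings keeps working.
doc = Document("pandas provides merge() for join operations.", {"title": "merging.rst"})

assert isinstance(doc, str)   # usable anywhere a string is expected
print(doc.page_content)       # "pandas provides merge() for join operations."
print(doc.metadata["title"])  # "merging.rst"
```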

@ -1,16 +1,8 @@
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from application.vectorstore.document_class import Document
import elasticsearch
class Document(str):
"""Class for storing a piece of text and associated metadata."""
def __new__(cls, page_content: str, metadata: dict):
instance = super().__new__(cls, page_content)
instance.page_content = page_content
instance.metadata = metadata
return instance

@ -1,5 +1,5 @@
from langchain_community.vectorstores import FAISS
from application.vectorstore.base import BaseVectorStore
from langchain.vectorstores import FAISS
from application.core.settings import settings
class FaissStore(BaseVectorStore):
@ -7,20 +7,40 @@ class FaissStore(BaseVectorStore):
def __init__(self, path, embeddings_key, docs_init=None):
super().__init__()
self.path = path
embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
if docs_init:
self.docsearch = FAISS.from_documents(
docs_init, self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
docs_init, embeddings
)
else:
self.docsearch = FAISS.load_local(
self.path, self._get_embeddings(settings.EMBEDDINGS_NAME, settings.EMBEDDINGS_KEY)
self.path, embeddings
)
self.assert_embedding_dimensions(embeddings)
def search(self, *args, **kwargs):
return self.docsearch.similarity_search(*args, **kwargs)
def add_texts(self, *args, **kwargs):
return self.docsearch.add_texts(*args, **kwargs)
def save_local(self, *args, **kwargs):
return self.docsearch.save_local(*args, **kwargs)
def delete_index(self, *args, **kwargs):
return self.docsearch.delete(*args, **kwargs)
def assert_embedding_dimensions(self, embeddings):
"""
Check that the word embedding dimension of the docsearch index matches
the dimension of the word embeddings used
"""
if settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
try:
word_embedding_dimension = embeddings.client[1].word_embedding_dimension
except AttributeError as e:
raise AttributeError("word_embedding_dimension not found in embeddings.client[1]") from e
docsearch_index_dimension = self.docsearch.index.d
if word_embedding_dimension != docsearch_index_dimension:
raise ValueError(f"word_embedding_dimension ({word_embedding_dimension}) " +
f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")

@ -0,0 +1,126 @@
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from application.vectorstore.document_class import Document
class MongoDBVectorStore(BaseVectorStore):
def __init__(
self,
path: str = "",
embeddings_key: str = "embeddings",
collection: str = "documents",
index_name: str = "vector_search_index",
text_key: str = "text",
embedding_key: str = "embedding",
database: str = "docsgpt",
):
self._index_name = index_name
self._text_key = text_key
self._embedding_key = embedding_key
self._embeddings_key = embeddings_key
self._mongo_uri = settings.MONGO_URI
self._path = path.replace("application/indexes/", "").rstrip("/")
self._embedding = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
try:
import pymongo
except ImportError:
raise ImportError(
"Could not import pymongo python package. "
"Please install it with `pip install pymongo`."
)
self._client = pymongo.MongoClient(self._mongo_uri)
self._database = self._client[database]
self._collection = self._database[collection]
def search(self, question, k=2, *args, **kwargs):
query_vector = self._embedding.embed_query(question)
pipeline = [
{
"$vectorSearch": {
"queryVector": query_vector,
"path": self._embedding_key,
"limit": k,
"numCandidates": k * 10,
"index": self._index_name,
"filter": {
"store": {"$eq": self._path}
}
}
}
]
cursor = self._collection.aggregate(pipeline)
results = []
for doc in cursor:
text = doc[self._text_key]
doc.pop("_id")
doc.pop(self._text_key)
doc.pop(self._embedding_key)
metadata = doc
results.append(Document(text, metadata))
return results
def _insert_texts(self, texts, metadatas):
if not texts:
return []
embeddings = self._embedding.embed_documents(texts)
to_insert = [
{self._text_key: t, self._embedding_key: embedding, **m}
for t, m, embedding in zip(texts, metadatas, embeddings)
]
# insert the documents in MongoDB Atlas
insert_result = self._collection.insert_many(to_insert)
return insert_result.inserted_ids
def add_texts(self,
texts,
metadatas = None,
ids = None,
refresh_indices = True,
create_index_if_not_exists = True,
bulk_kwargs = None,
**kwargs,):
#dims = self._embedding.client[1].word_embedding_dimension
# # check if index exists
# if create_index_if_not_exists:
# # check if index exists
# info = self._collection.index_information()
# if self._index_name not in info:
# index_mongo = {
# "fields": [{
# "type": "vector",
# "path": self._embedding_key,
# "numDimensions": dims,
# "similarity": "cosine",
# },
# {
# "type": "filter",
# "path": "store"
# }]
# }
# self._collection.create_index(self._index_name, index_mongo)
batch_size = 100
_metadatas = metadatas or ({} for _ in texts)
texts_batch = []
metadatas_batch = []
result_ids = []
for i, (text, metadata) in enumerate(zip(texts, _metadatas)):
texts_batch.append(text)
metadatas_batch.append(metadata)
if (i + 1) % batch_size == 0:
result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))
texts_batch = []
metadatas_batch = []
if texts_batch:
result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))
return result_ids
def delete_index(self, *args, **kwargs):
self._collection.delete_many({"store": self._path})
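A minimal usage sketch (not part of the diff); the path is a placeholder, and it assumes `MONGO_URI` points at an Atlas cluster where the `vector_search_index` already exists:

```python
from application.vectorstore.mongodb import MongoDBVectorStore

store = MongoDBVectorStore(path="application/indexes/local/default/")

# Each document should carry a "store" field matching the normalised path
# ("local/default" here), because search() filters on it.
ids = store.add_texts(
    ["pandas provides merge() for database-style joins."],
    metadatas=[{"title": "merging.rst", "store": "local/default"}],
)

results = store.search("How do I merge two tables?", k=2)
for doc in results:
    print(doc, doc.metadata)  # Document prints as its text; metadata is a dict
```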

@ -1,11 +1,13 @@
from application.vectorstore.faiss import FaissStore
from application.vectorstore.elasticsearch import ElasticsearchStore
from application.vectorstore.mongodb import MongoDBVectorStore
class VectorCreator:
vectorstores = {
'faiss': FaissStore,
'elasticsearch':ElasticsearchStore
'elasticsearch':ElasticsearchStore,
'mongodb': MongoDBVectorStore,
}
@classmethod

@ -21,17 +21,34 @@ except FileExistsError:
pass
# Define a function to extract metadata from a given filename.
def metadata_from_filename(title):
store = '/'.join(title.split('/')[1:3])
return {'title': title, 'store': store}
# Define a function to generate a random string of a given length.
def generate_random_string(length):
return ''.join([string.ascii_letters[i % 52] for i in range(length)])
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Define the main function for ingesting and processing documents.
def ingest_worker(self, directory, formats, name_job, filename, user):
"""
Ingest and process documents.
Args:
self: Reference to the instance of the task.
directory (str): Specifies the directory for ingesting ('inputs' or 'temp').
formats (list of str): List of file extensions to consider for ingestion (e.g., [".rst", ".md"]).
name_job (str): Name of the job for this ingestion task.
filename (str): Name of the file to be ingested.
user (str): Identifier for the user initiating the ingestion.
Returns:
dict: Information about the completed ingestion task, including input parameters and a "limited" flag.
"""
# directory = 'inputs' or 'temp'
# formats = [".rst", ".md"]
input_files = None
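To make the helpers above concrete, here are illustrative calls (the input path is hypothetical):

```python
# metadata_from_filename keeps the second and third path segments as the store id.
print(metadata_from_filename("inputs/local/default/index.rst"))
# -> {'title': 'inputs/local/default/index.rst', 'store': 'local/default'}

print(generate_random_string(5))
# -> 'abcde' (as written, the helper cycles through the letters deterministically)
```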

@ -1,2 +1,2 @@
ignore:
- "*/tests/*”
- "*/tests/*"

@ -0,0 +1,22 @@
version: "3.9"
services:
frontend:
build: ./frontend
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
ports:
- "5173:5173"
depends_on:
- mock-backend
mock-backend:
build: ./mock-backend
ports:
- "7091:7091"
redis:
image: redis:6-alpine
ports:
- 6379:6379

@ -16,6 +16,7 @@ services:
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
@ -35,6 +36,7 @@ services:
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt

@ -1 +1,51 @@
# nextra-docsgpt
# nextra-docsgpt
## Setting Up Docs Folder of DocsGPT Locally
### 1. Clone the DocsGPT repository:
```bash
git clone https://github.com/arc53/DocsGPT.git
```
### 2. Navigate to the docs folder:
```bash
cd DocsGPT/docs
```
The docs folder contains the markdown files that make up the documentation. The majority of the files are in the pages directory. Some notable files in this folder include:
`index.mdx`: The main documentation file.
`_app.js`: This file is used to customize the default Next.js application shell.
`theme.config.jsx`: This file is for configuring the Nextra theme for the documentation.
### 3. Verify that you have Node.js and npm installed in your system. You can check by running:
```bash
node --version
npm --version
```
### 4. If not installed, download Node.js and npm from the respective official websites.
### 5. Once you have Node.js and npm running, proceed to install yarn - another package manager that helps to manage project dependencies:
```bash
npm install --global yarn
```
### 6. Install the project dependencies using yarn:
```bash
yarn install
```
### 7. After the successful installation of the project dependencies, start the local server:
```bash
yarn dev
```
- Now, you should be able to view the docs on your local environment by visiting `http://localhost:5000`. You can explore the different markdown files and make changes as you see fit.
- **Footnotes:** This guide assumes you have Node.js and npm installed. The guide involves running a local server using yarn, and viewing the documentation offline. If you encounter any issues, it may be worth verifying your Node.js and npm installations and whether you have installed yarn correctly.

@ -1,9 +1,9 @@
const withNextra = require('nextra')({
theme: 'nextra-theme-docs',
themeConfig: './theme.config.jsx'
})
theme: 'nextra-theme-docs',
themeConfig: './theme.config.jsx'
})
module.exports = withNextra()
module.exports = withNextra()
// If you have other Next.js configurations, you can pass them as the parameter:
// module.exports = withNextra({ /* other next.js config */ })
// If you have other Next.js configurations, you can pass them as the parameter:
// module.exports = withNextra({ /* other next.js config */ })

docs/package-lock.json (generated, 1953 lines)

File diff suppressed because it is too large

@ -1,10 +1,16 @@
{
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start"
},
"license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.0.2",
"@vercel/analytics": "^1.1.1",
"docsgpt": "^0.2.4",
"next": "^13.4.19",
"nextra": "^2.12.3",
"nextra-theme-docs": "^2.12.3",
"next": "^14.0.4",
"nextra": "^2.13.2",
"nextra-theme-docs": "^2.13.2",
"react": "^18.2.0",
"react-dom": "^18.2.0"
}

@ -1,42 +1,35 @@
# Self-hosting DocsGPT on Amazon Lightsail
Here's a step-by-step guide on how to setup an Amazon Lightsail instance to host DocsGPT.
Here's a step-by-step guide on how to set up an Amazon Lightsail instance to host DocsGPT.
## Configuring your instance
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here).
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking [here](#connecting-to-your-newly-created-instance)).
### 1. Create an account or login to https://lightsail.aws.amazon.com
### 1. Create an AWS Account:
If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.
### 2. Click on "Create instance"
### 2. Create an Instance:
### 3. Create your instance
a. Click "Create Instance."
The first step is to select the "Instance location". In most cases, there's no need to switch locations as the default one will work well.
b. Select the "Instance location." In most cases, the default location works fine.
After that, it is time to pick your Instance Image. We recommend using "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
As for instance plan, it'll vary depending on your unique demands, but a "1 GB, 1vCPU, 40GB SSD and 2TB transfer" setup should cover most scenarios.
d. Configure the instance plan based on your requirements. A "1 GB, 1vCPU, 40GB SSD, and 2TB transfer" setup is recommended for most scenarios.
Lastly, identify your instance by giving it a unique name and then hit "Create instance".
e. Give your instance a unique name and click "Create Instance."
PS: Once you create your instance, it'll likely take a few minutes for the setup to be completed.
PS: It may take a few minutes for the instance setup to complete.
#### The recommended configuration is as follows:
### Connecting to Your newly created Instance
- Ubuntu 20.04 LTS
- 1GB RAM
- 1vCPU
- 40GB SSD Hard Drive
- 2TB transfer
Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."
### Connecting to your newly created instance
#### Clone the DocsGPT Repository
Your instance will be ready for use a few minutes after being created. To access it, just open it up and click on "Connect using SSH".
#### Clone the repository
A terminal window will pop up, and the first step will be to clone the DocsGPT git repository:
A terminal window will pop up, and the first step will be to clone the DocsGPT Git repository:
`git clone https://github.com/arc53/DocsGPT.git`
@ -56,15 +49,15 @@ And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT folder
#### Access the DocsGPT Folder
Enter the following command to access the folder in which DocsGPT docker-compose file is present.
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the environment
#### Prepare the Environment
Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it.
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
@ -78,16 +71,16 @@ SELF_HOSTED_MODEL=false
To save the file, press CTRL+X, then Y, and then ENTER.
Next, we need to set a correct IP for our Backend. To do so, open the docker-compose.yml file:
Next, set the correct IP for the Backend by opening the docker-compose.yml file:
`nano docker-compose.yml`
And change this line 7 `VITE_API_HOST=http://localhost:7091`
And Change line 7 to: `VITE_API_HOST=http://localhost:7091`
to this `VITE_API_HOST=http://<your instance public IP>:7091`
This will allow the frontend to connect to the backend.
#### Running the app
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
@ -95,18 +88,23 @@ You're almost there! Now that all the necessary bits and pieces have been instal
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done, you can go ahead and close the terminal window.
Once this is done you can go ahead and close the terminal window.
#### Enabling ports
#### Enabling Ports
Before you are able to access your live instance, you must first enable the port that it is using.
a. Before you are able to access your live instance, you must first enable the port that it is using.
Open your Lightsail instance and head to "Networking".
b. Open your Lightsail instance and head to "Networking".
Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.
#### Access your instance
Your instance will now be available under your Public IP Address and port `5173`. Enjoy!
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)
- [Deploy DocsGPT on DigitalOcean Droplet](https://dev.to/rutamhere/deploying-docsgpt-on-digitalocean-droplet-50ea)
- [Deploy DocsGPT on Kamatera Performance Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-kamatera-performance-cloud-1bj)

@ -1,31 +1,128 @@
## Launching Web App
Note: Make sure you have Docker installed
**Note**: Make sure you have Docker installed
On macOS or Linux, just write:
**On macOS or Linux:**
Just run the following command:
`./setup.sh`
```bash
./setup.sh
```
It will install all the dependencies and give you an option to download the local model or use OpenAI
This command will install all the necessary dependencies and provide you with an option to use our LLM API, download the local model or use OpenAI.
Otherwise, refer to this Guide:
If you prefer to follow manual steps, refer to this guide:
1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys).
3. Run `docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.
1. Open and download this repository with
```bash
git clone https://github.com/arc53/DocsGPT.git
```
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys). (optional in case you want to use OpenAI)
3. Run the following commands:
```bash
docker-compose build && docker-compose up
```
4. Navigate to http://localhost:5173/.
To stop, simply press **Ctrl + C**.
**For WINDOWS:**
To run the setup on Windows, you have two options: using the Windows Subsystem for Linux (WSL) or using Git Bash or Command Prompt.
**Option 1: Using Windows Subsystem for Linux (WSL):**
1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: (https://learn.microsoft.com/en-us/windows/wsl/install).
2. After setting up WSL, open the WSL terminal.
3. Clone the repository and create the `.env` file:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
```bash
./run-with-docker-compose.sh
```
6. Open your web browser and navigate to http://localhost:5173/.
7. To stop the setup, just press **Ctrl + C** in the WSL terminal
**Option 2: Using Git Bash or Command Prompt (CMD):**
1. Install Git for Windows if you haven't already. Download it from the official website: (https://gitforwindows.org/).
2. Open Git Bash or Command Prompt.
3. Clone the repository and create the `.env` file:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
```bash
./run-with-docker-compose.sh
```
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press **Ctrl + C** in the Git Bash or Command Prompt terminal.
These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt.
**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.
For WINDOWS:
To run the given setup on Windows, you can use the Windows Subsystem for Linux (WSL) or a Git Bash terminal to execute similar commands. Here are the steps adapted for Windows:
Option 1: Using Windows Subsystem for Linux (WSL):
1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: (https://learn.microsoft.com/en-us/windows/wsl/install).
2. After setting up WSL, open the WSL terminal.
3. Clone the repository and create the `.env` file:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
```bash
./run-with-docker-compose.sh
```
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press **Ctrl + C** in the WSL terminal.
Option 2: Using Git Bash or Command Prompt (CMD):
1. Install Git for Windows if you haven't already. You can download it from the official website: (https://gitforwindows.org/).
2. Open Git Bash or Command Prompt.
3. Clone the repository and create the `.env` file:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
```bash
./run-with-docker-compose.sh
```
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press **Ctrl + C** in the Git Bash or Command Prompt terminal.
These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt. Make sure you have Docker installed and properly configured on your Windows system for this to work.
To stop, just run `Ctrl + C`.
### Chrome Extension
To install the Chrome extension:
#### Installing the Chrome extension:
To enhance your DocsGPT experience, you can install the DocsGPT Chrome extension. Here's how:
1. In the DocsGPT GitHub repository, click on the "Code" button and select "Download ZIP".
1. In the DocsGPT GitHub repository, click on the **Code** button and select **Download ZIP**.
2. Unzip the downloaded file to a location you can easily access.
3. Open the Google Chrome browser and click on the three dots menu (upper right corner).
4. Select "More Tools" and then "Extensions".
5. Turn on the "Developer mode" switch in the top right corner of the Extensions page.
6. Click on the "Load unpacked" button.
7. Select the "Chrome" folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome).
4. Select **More Tools** and then **Extensions**.
5. Turn on the **Developer mode** switch in the top right corner of the **Extensions page**.
6. Click on the **Load unpacked** button.
7. Select the **Chrome** folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome).
8. The extension should now be added to Google Chrome and can be managed on the Extensions page.
9. To disable or remove the extension, simply turn off the toggle switch on the extension card or click the "Remove" button.
9. To disable or remove the extension, simply turn off the toggle switch on the extension card or click the **Remove** button.

@ -0,0 +1,254 @@
# Self-hosting DocsGPT on Railway
Here's a step-by-step guide on how to host DocsGPT on Railway App.
First, clone and set up the project locally so you can run, test, and modify it.
### 1. Clone and GitHub Setup
a. Open a terminal (Windows Shell or Git Bash, recommended).
b. Type `git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
#### Install Docker and Docker Compose
The DocsGPT backend and worker use Python, the frontend is written in React, and the whole application is containerized using Docker. To install Docker and Docker Compose, enter the following commands:
`sudo apt install docker.io`
And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT Folder
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```
To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the Backend by opening the docker-compose.yml file:
`nano docker-compose.yml`
Change line 7 from `VITE_API_HOST=http://localhost:7091` to `VITE_API_HOST=http://<your instance public IP>:7091`.
This will allow the frontend to connect to the backend.
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker-compose up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done you can go ahead and close the terminal window.
### 2. Pushing it to your own Repository
a. Create a Repository on your GitHub.
b. Open Terminal in the same directory of the Cloned project.
c. Type `git init`
d. `git add .`
e. `git commit -m "first-commit"`
f. `git remote add origin <your repository link>`
g. `git push --set-upstream origin master`
Your local files will now be pushed to your GitHub Account. :)
### 3. Create a Railway Account:
If you haven't already, create or log in to your Railway account by visiting [Railway](https://railway.app/).
Sign up via **GitHub** (recommended).
### 4. Start New Project:
a. Open the Railway app and click "Start New Project."
b. Choose an option from the list (recommended: "**Deploy from GitHub Repo**").
c. Choose the required Repository from your GitHub.
d. Configure and allow access to modify your GitHub content from the pop-up window.
e. Agree to all the terms and conditions.
PS: It may take a few minutes for the account setup to complete.
#### You will get a free trial of $5 (use it to evaluate, and purchase more if satisfied and needed).
### 5. Connecting Your New Railway App to GitHub
a. Choose the DocsGPT repo you want to deploy from the list of your GitHub repositories.
b. Click on "Deploy Now."
![Three Tabs will be there](/Railway-selection.png)
c. Select Variables Tab.
d. Upload the `.env` file you used for the local setup here.
e. Now go to the Settings tab.
f. Go to "Networking" and click on "Generate Domain Name" to get the URL of your hosted project.
g. You can update the root directory, build command, and installation command as needed.
*[However, it is recommended to leave these options at their defaults unless changes are really necessary.]*
Your own DocsGPT is now available at the generated domain URL. :)

@ -6,5 +6,9 @@
"Quickstart": {
"title": "⚡Quickstart",
"href": "/Deploying/Quickstart"
},
"Railway-Deploying": {
"title": "🚂Deploying on Railway",
"href": "/Deploying/Railway-Deploying"
}
}
}

@ -1,9 +1,27 @@
Currently, the application provides the following main API endpoints:
# API Endpoints Documentation
### /api/answer
It's a POST request that sends a JSON in body with 4 values. It will receive an answer for a user provided question.
Here is a JavaScript fetch example:
*Currently, the application provides the following main API endpoints:*
### 1. /api/answer
**Description:**
This endpoint is used to request answers to user-provided questions.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the following fields:
* `question` — The user's question.
* `history` — (Optional) Previous conversation history.
* `api_key` — Your API key.
* `embeddings_key` — Your embeddings key.
* `active_docs` — The location of active documentation.
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
@ -18,22 +36,33 @@ fetch("http://127.0.0.1:5000/api/answer", {
.then(console.log.bind(console))
```
In response, you will get a JSON document like this one:
**Response**
In response, you will get a JSON document containing the `answer`, `query` and `result`:
```json
{
"answer": " Hi there! How can I help you?\n",
"answer": "Hi there! How can I help you?\n",
"query": "Hi",
"result": " Hi there! How can I help you?\nSOURCES:"
"result": "Hi there! How can I help you?\nSOURCES:"
}
```
### /api/docs_check
It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)).
It's a POST request that sends a JSON in a body with 1 value. Here is a JavaScript fetch example:
### 2. /api/docs_check
**Description:**
This endpoint will make sure documentation is loaded on the server (just run it every time user is switching between libraries (documentations)).
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the field:
* `docs` — The location of the documentation:
```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
// docs_check (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
@ -45,7 +74,9 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
.then(console.log.bind(console))
```
In response, you will get a JSON document like this one:
**Response:**
In response, you will get a JSON document like this one indicating whether the documentation exists or not:
```json
{
"status": "exists"
@ -53,18 +84,43 @@ In response, you will get a JSON document like this one:
```
### /api/combine
Provides JSON that tells UI which vectors are available and where they are located with a simple get request.
### 3. /api/combine
**Description:**
This endpoint provides information about available vectors and their locations with a simple GET request.
**Request:**
**Method**: `GET`
**Response:**
Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
* `date`
* `description`
* `docLink`
* `fullName`
* `language`
* `location` (local or docshub)
* `model`
* `name`
* `version`
Example of JSON in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
### 4. /api/upload
**Description:**
This endpoint is used to upload a file that needs to be trained, response is JSON with task ID, which can be used to check on task's progress.
**Request:**
**Method**: `POST`
**Request Body**: A multipart/form-data form with file upload and additional fields, including `user` and `name`.
### /api/upload
Uploads file that needs to be trained, response is JSON with task ID, which can be used to check on task's progress
HTML example:
```html
@ -79,20 +135,26 @@ HTML example:
</form>
```
Response:
```json
{
"status": "ok",
"task_id": "b2684988-9047-428b-bd47-08518679103c"
}
**Response:**
```
JSON response with a status and a task ID that can be used to check the task's progress.
### 5. /api/task_status
**Description:**
### /api/task_status
Gets task status (`task_id`) from `/api/upload`:
This endpoint is used to get the status of a task (`task_id`) from `/api/upload`
**Request:**
**Method**: `GET`
**Query Parameter**: `task_id` (task ID to check)
**Sample JavaScript Fetch Request:**
```js
// Task status (Get http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
@ -102,41 +164,53 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then(console.log.bind(console))
```
Responses:
**Response:**
There are two types of responses:
1. While task is still running, where "current" will show progress from 0 to 100:
```json
{
"result": {
"current": 1
},
"status": "PROGRESS"
}
```
2. When task is completed:
```json
{
"result": {
"directory": "temp",
"filename": "install.rst",
"formats": [
".rst",
".md",
".pdf"
],
"name_job": "somename",
"user": "local"
},
"status": "SUCCESS"
}
```
1. While the task is still running, the 'current' value will show progress from 0 to 100.
```json
{
"result": {
"current": 1
},
"status": "PROGRESS"
}
```
### /api/delete_old
Deletes old Vector stores:
2. When task is completed:
```json
{
"result": {
"directory": "temp",
"filename": "install.rst",
"formats": [
".rst",
".md",
".pdf"
],
"name_job": "somename",
"user": "local"
},
"status": "SUCCESS"
}
```
### 6. /api/delete_old
**Description:**
This endpoint is used to delete old Vector Stores.
**Request:**
**Method**: `GET`
**Query Parameter**: `task_id`
**Sample JavaScript Fetch Request:**
```js
// Task status (GET http://127.0.0.1:5000/api/docs_check)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
// delete_old (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://localhost:5001/api/delete_old?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
@ -144,9 +218,11 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
Response:
JSON response indicating the status of the operation:
```json
{ "status": "ok" }

@ -1,29 +1,44 @@
### To start chatwoot extension:
1. Prepare and start the DocsGPT itself (load your documentation too). Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data.
2. Go to chatwoot, **Navigate** to your profile (bottom left), click on profile settings, scroll to the bottom and copy **Access Token**.
3. Navigate to `/extensions/chatwoot`. Copy `.env_sample` and create `.env` file.
4. Fill in the values.
## Chatwoot Extension Setup Guide
```
docsgpt_url=<docsgpt_api_url>
chatwoot_url=<chatwoot_url>
docsgpt_key=<openai_api_key or other llm key>
chatwoot_token=<from part 2>
```
### Step 1: Prepare and Start DocsGPT
5. Start with `flask run` command.
- **Launch DocsGPT**: Follow the instructions in our [DocsGPT Wiki](https://github.com/arc53/DocsGPT/wiki) to start DocsGPT. Make sure to load your documentation.
If you want for bot to stop responding to questions for a specific user or session, just add a label `human-requested` in your conversation.
### Step 2: Get Access Token from Chatwoot
- Go to Chatwoot.
- In your profile settings (located at the bottom left), scroll down and copy the **Access Token**.
### Optional (extra validation)
In `app.py` uncomment lines 12-13 and 71-75
### Step 3: Set Up Chatwoot Extension
in your `.env` file add:
- Navigate to `/extensions/chatwoot`.
- Copy the `.env_sample` file and create a new file named `.env`.
- Fill in the values in the `.env` file as follows:
```env
docsgpt_url=<Docsgpt_API_URL>
chatwoot_url=<Chatwoot_URL>
docsgpt_key=<OpenAI_API_Key or Other_LLM_Key>
chatwoot_token=<Token from Step 2>
```
account_id=(optional) 1
### Step 4: Start the Extension
- Use the command `flask run` to start the extension.
### Step 5: Optional - Extra Validation
- In app.py, uncomment lines 12-13 and 71-75.
- Add the following lines to your .env file:
```env
account_id=(optional) 1
assignee_id=(optional) 1
```
These Chatwoot values help ensure you respond to the correct widget and handle questions assigned to a specific user.
### Stopping Bot Responses for Specific User or Session
- If you want the bot to stop responding to questions for a specific user or session, add a label `human-requested` in your conversation.
### Additional Notes
Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user.
- For further details on training on other documentation, refer to our [wiki](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation).

@ -1,22 +1,31 @@
### How to set up react docsGPT widget on your website:
### Setting up the DocsGPT Widget in Your React Project
### Introduction:
The DocsGPT Widget is a powerful tool that allows you to integrate AI-powered documentation assistance into your web applications. This guide will walk you through the installation and usage of the DocsGPT Widget in your React project. Whether you're building a web app or a knowledge base, this widget can enhance your user experience.
### Installation
Got to your project and install a new dependency: `npm install docsgpt`.
First, make sure you have Node.js and npm installed in your project. Then go to your project and install a new dependency: `npm install docsgpt`.
### Usage
Go to your project and in the file where you want to use the widget, import it:
In the file where you want to use the widget, import it and include the CSS file:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
```
Then you can use it like this: `<DocsGPTWidget />`
DocsGPTWidget takes 3 props:
- `apiHost` — URL of your DocsGPT API.
- `selectDocs` — documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
- `apiKey` — usually it's empty.
Now, you can use the widget in your component like this:
```jsx
<DocsGPTWidget
apiHost="https://your-docsgpt-api.com"
selectDocs="local/docs.zip"
apiKey=""
/>
```
DocsGPTWidget takes 3 **props**:
1. `apiHost` — The URL of your DocsGPT API.
2. `selectDocs` — The documentation source that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
3. `apiKey` — Usually, it's empty.
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
@ -32,6 +41,7 @@ export default function MyApp({ Component, pageProps }) {
</>
)
}
```
```
For more information about React, refer to this [link here](https://react.dev/learn)

@ -1,4 +1,27 @@
## To customize a main prompt, navigate to `/application/prompt/combine_prompt.txt`
# Customizing the Main Prompt
You can try editing it to see how the model responses.
Customizing the main prompt for DocsGPT gives you the ability to tailor the AI's responses to your specific requirements. By modifying the prompt text, you can achieve more accurate and relevant answers. Here's how you can do it:
1. Navigate to `/application/prompts/combine_prompt.txt`.
2. Open the `combine_prompt.txt` file and modify the prompt text to suit your needs. You can experiment with different phrasings and structures to observe how the model responds. The main prompt serves as guidance to the AI model on how to generate responses.
## Example Prompt Modification
**Original Prompt:**
```markdown
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the users question. If it's not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
(code)
{summaries}
```
Feel free to customize the prompt to align it with your specific use case or the kind of responses you want from the AI. For example, you can focus on specific document types, industries, or topics to get more targeted results.
## Conclusion
Customizing the main prompt for DocsGPT allows you to tailor the AI's responses to your unique requirements. Whether you need in-depth explanations, code examples, or specific insights, you can achieve it by modifying the main prompt. Remember to experiment and fine-tune your prompts to get the best results.

@ -1,43 +1,47 @@
## How to train on other documentation
This AI can use any documentation, but first it needs to be prepared for similarity search.
This AI can utilize any documentation, but it requires preparation for similarity search. Follow these steps to get your documentation ready:
**Step 1: Prepare Your Documentation**
![video-example-of-how-to-do-it](https://d3dg1063dc54p9.cloudfront.net/videos/how-to-vectorise.gif)
Start by going to `/scripts/` folder.
If you open this file, you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
It currently uses OPEN_AI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-$4.
It currently uses OpenAI to create the vector store, so make sure your documentation is not too large. Vectorising the Pandas documentation cost me around $3-$4.
You can usually find documentation on Github in `docs/` folder for most open-source projects.
You can typically find documentation on GitHub in the `docs/` folder for most open-source projects.
### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
- Name it `inputs/`
- Put all your .rst/.md files in there
- The search is recursive, so you don't need to flatten them
### 1. Find documentation in .rst/.md format and create a folder with it in your scripts directory.
- Name it `inputs/`.
- Put all your .rst/.md files in there.
- The search is recursive, so you don't need to flatten them.
If there are no .rst/.md files just convert whatever you find to .txt and feed it. (don't forget to change the extension in script)
If there are no .rst/.md files, convert whatever you find to a .txt file and feed it. (Don't forget to change the extension in the script).
### 2. Create .env file in `scripts/` folder
And write your OpenAI API key inside
`OPENAI_API_KEY=<your-api-key>`
### Step 2: Configure Your OpenAI API Key
1. Create a .env file in the scripts/ folder.
- Add your OpenAI API key inside: OPENAI_API_KEY=<your-api-key>.
### 3. Run scripts/ingest.py
### Step 3: Run the Ingestion Script
`python ingest.py ingest`
It will tell you how much it will cost
It will provide you with the estimated cost.
### 4. Move `index.faiss` and `index.pkl` generated in `scripts/output` to `application/` folder.
### Step 4: Move `index.faiss` and `index.pkl` generated in `scripts/output` to `application/` folder.
### 5. Run web app
Once you run it will use new context that is relevant to your documentation
Make sure you select default in the dropdown in the UI
### Step 5: Run the Web App
Once you run it, it will use new context relevant to your documentation. Make sure you select default in the dropdown in the UI.
## Customization
You can learn more about options while running ingest.py by running:
- Make sure you select 'default' from the dropdown in the UI.
## Customization
You can learn more about options while running ingest.py by executing:
`python ingest.py --help`
| Options | |
|:--------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:|

@ -1,36 +1,48 @@
Fortunately, there are many providers for LLM's and some of them can even be run locally
# Setting Up Local Language Models for Your App
There are two models used in the app:
1. Embeddings.
2. Text generation.
Your app relies on two essential models: Embeddings and Text Generation. While OpenAI's default models work seamlessly, you have the flexibility to switch providers or even run the models locally.
By default, we use OpenAI's models but if you want to change it or even run it locally, it's very simple!
## Step 1: Configure Environment Variables
### Go to .env file or set environment variables:
Navigate to the `.env` file or set the following environment variables:
`LLM_NAME=<your Text generation>`
```env
LLM_NAME=<your Text Generation model>
API_KEY=<API key for Text Generation>
EMBEDDINGS_NAME=<LLM for Embeddings>
EMBEDDINGS_KEY=<API key for Embeddings>
VITE_API_STREAMING=<true or false>
```
`API_KEY=<api_key for Text generation>`
You can omit the keys if users provide their own. Ensure you set `LLM_NAME` and `EMBEDDINGS_NAME`.
`EMBEDDINGS_NAME=<llm for embeddings>`
## Step 2: Choose Your Models
`EMBEDDINGS_KEY=<api_key for embeddings>`
**Options for `LLM_NAME`:**
- openai ([More details](https://platform.openai.com/docs/models))
- anthropic ([More details](https://docs.anthropic.com/claude/reference/selecting-a-model))
- manifest ([More details](https://python.langchain.com/docs/integrations/llms/manifest))
- cohere ([More details](https://docs.cohere.com/docs/llmu))
- llama.cpp ([More details](https://python.langchain.com/docs/integrations/llms/llamacpp))
- huggingface (Arc53/DocsGPT-7B by default)
- sagemaker ([More details](https://aws.amazon.com/sagemaker/))
`VITE_API_STREAMING=<true or false (true if using openai, false for all others)>`
You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.
Note: for huggingface you can choose any model inside application/llm/huggingface.py or pass llm_name on init, loads
Options:
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)
EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)
**Options for `EMBEDDINGS_NAME`:**
- openai_text-embedding-ada-002
- huggingface_sentence-transformers/all-mpnet-base-v2
- huggingface_hkunlp/instructor-large
- cohere_medium
If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder: `https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf`.
If you want to be completely local, set `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2`.
Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you.
For llama.cpp, download the required model and place it in the `models/` folder.
That's it!
Alternatively, for local Llama setup, run `setup.sh` and choose option 1. The script handles the DocsGPT model addition.
### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with important data and don't want anything to leave your premises.
## Step 3: Local Hosting for Privacy
Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
If you are working with sensitive data, host everything locally by setting `LLM_NAME` to llama.cpp or huggingface. You can use any model available on Hugging Face; for llama.cpp you need to convert it into GGUF format.
That's it! Your app is now configured for local and private hosting, ensuring optimal security for critical data.
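For reference, a minimal `.env` sketch for a fully local setup, assembled from the options above (model names are examples; adjust them to your needs):

```env
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
VITE_API_STREAMING=false
SELF_HOSTED_MODEL=true
```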

@ -1,3 +1,5 @@
# Avoiding hallucinations
If your AI uses external knowledge and is not explicit enough, that is okay, because we try to make DocsGPT friendly.
But if you want to adjust this behaviour, here is a simple way:

@ -25,8 +25,6 @@ DocsGPT 🦖 is an innovative open-source tool designed to simplify the retrieva
Try it yourself: [https://docsgpt.arc53.com/](https://docsgpt.arc53.com/)
### [🎉 Join the Hacktoberfest with DocsGPT and Contribute to Earn a Free T-shirt!](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
<Cards
num={3}
children={Object.keys(allGuides).map((key, i) => (

Binary file not shown.

After

Width:  |  Height:  |  Size: 11 KiB

@ -1,7 +1,7 @@
# DocsGPT react widget
THis widget will allow you to embed a DocsGPT assistant in your react app.
This widget will allow you to embed a DocsGPT assistant in your React app.
## Installation

@ -2,7 +2,7 @@ import Ne, { useState as ke, useRef as ur, useEffect as Pe } from "react";
var ne = { exports: {} }, Y = {};
/**
* @license React
* react-jsx-runtime.production.min.js
* react-jsx-runtime.development.js
*
* Copyright (c) Facebook, Inc. and its affiliates.
*
@ -11,35 +11,7 @@ var ne = { exports: {} }, Y = {};
*/
var Ce;
function cr() {
if (Ce)
return Y;
Ce = 1;
var N = Ne, w = Symbol.for("react.element"), C = Symbol.for("react.fragment"), u = Object.prototype.hasOwnProperty, E = N.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ReactCurrentOwner, S = { key: !0, ref: !0, __self: !0, __source: !0 };
function T(b, d, v) {
var m, h = {}, x = null, p = null;
v !== void 0 && (x = "" + v), d.key !== void 0 && (x = "" + d.key), d.ref !== void 0 && (p = d.ref);
for (m in d)
u.call(d, m) && !S.hasOwnProperty(m) && (h[m] = d[m]);
if (b && b.defaultProps)
for (m in d = b.defaultProps, d)
h[m] === void 0 && (h[m] = d[m]);
return { $$typeof: w, type: b, key: x, ref: p, props: h, _owner: E.current };
}
return Y.Fragment = C, Y.jsx = T, Y.jsxs = T, Y;
}
var L = {};
/**
* @license React
* react-jsx-runtime.development.js
*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
var Oe;
function fr() {
return Oe || (Oe = 1, process.env.NODE_ENV !== "production" && function() {
return Ce || (Ce = 1, process.env.NODE_ENV !== "production" && function() {
var N = Ne, w = Symbol.for("react.element"), C = Symbol.for("react.portal"), u = Symbol.for("react.fragment"), E = Symbol.for("react.strict_mode"), S = Symbol.for("react.profiler"), T = Symbol.for("react.provider"), b = Symbol.for("react.context"), d = Symbol.for("react.forward_ref"), v = Symbol.for("react.suspense"), m = Symbol.for("react.suspense_list"), h = Symbol.for("react.memo"), x = Symbol.for("react.lazy"), p = Symbol.for("react.offscreen"), R = Symbol.iterator, j = "@@iterator";
function J(e) {
if (e === null || typeof e != "object")
@ -366,7 +338,7 @@ function fr() {
}
function ye(e) {
if (Je(e))
return g("The provided key is an unsupported type %s. This value must be coerced to a string before using it here.", qe(e)), ge(e);
return g("The provided key is an unsupported type %s. This value must be coerced to a string before before using it here.", qe(e)), ge(e);
}
var W = O.ReactCurrentOwner, Be = {
key: !0,
@ -621,10 +593,38 @@ Check the top-level render call using <` + t + ">.");
return Se(e, r, t, !1);
}
var sr = or, lr = ir;
L.Fragment = u, L.jsx = sr, L.jsxs = lr;
}()), L;
Y.Fragment = u, Y.jsx = sr, Y.jsxs = lr;
}()), Y;
}
var L = {};
/**
* @license React
* react-jsx-runtime.production.min.js
*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
var Oe;
function fr() {
if (Oe)
return L;
Oe = 1;
var N = Ne, w = Symbol.for("react.element"), C = Symbol.for("react.fragment"), u = Object.prototype.hasOwnProperty, E = N.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ReactCurrentOwner, S = { key: !0, ref: !0, __self: !0, __source: !0 };
function T(b, d, v) {
var m, h = {}, x = null, p = null;
v !== void 0 && (x = "" + v), d.key !== void 0 && (x = "" + d.key), d.ref !== void 0 && (p = d.ref);
for (m in d)
u.call(d, m) && !S.hasOwnProperty(m) && (h[m] = d[m]);
if (b && b.defaultProps)
for (m in d = b.defaultProps, d)
h[m] === void 0 && (h[m] = d[m]);
return { $$typeof: w, type: b, key: x, ref: p, props: h, _owner: E.current };
}
return L.Fragment = C, L.jsx = T, L.jsxs = T, L;
}
process.env.NODE_ENV === "production" ? ne.exports = cr() : ne.exports = fr();
process.env.NODE_ENV === "production" ? ne.exports = fr() : ne.exports = cr();
var l = ne.exports;
function dr({
question: N = "",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@ -0,0 +1,3 @@
/// <reference types="react" />
declare function App(): JSX.Element;
export default App;

@ -0,0 +1 @@
export { DocsGPTWidget } from "./DocsGPTWidget";

@ -1 +0,0 @@
/// <reference types="vite/client" />

File diff suppressed because it is too large

@ -36,12 +36,12 @@
"devDependencies": {
"@types/react": "^18.0.26",
"@types/react-dom": "^18.0.9",
"@vitejs/plugin-react-swc": "^3.0.0",
"@vitejs/plugin-react-swc": "^3.5.0",
"autoprefixer": "^10.4.13",
"postcss": "^8.4.31",
"typescript": "^4.9.3",
"vite": "^4.0.0",
"vite-plugin-dts": "^1.7.1"
"vite": "^5.0.12",
"vite-plugin-dts": "^3.7.0"
},
"repository": {
"type": "git",

File diff suppressed because it is too large

@ -28,7 +28,8 @@
"react-markdown": "^8.0.7",
"react-redux": "^8.0.5",
"react-router-dom": "^6.8.1",
"react-syntax-highlighter": "^15.5.0"
"react-syntax-highlighter": "^15.5.0",
"remark-gfm": "^3.0.0"
},
"devDependencies": {
"@types/react": "^18.0.27",
@ -36,7 +37,7 @@
"@types/react-syntax-highlighter": "^15.5.6",
"@typescript-eslint/eslint-plugin": "^5.51.0",
"@typescript-eslint/parser": "^5.51.0",
"@vitejs/plugin-react": "^3.1.0",
"@vitejs/plugin-react": "^4.2.1",
"autoprefixer": "^10.4.13",
"eslint": "^8.33.0",
"eslint-config-prettier": "^8.6.0",
@ -54,7 +55,7 @@
"prettier-plugin-tailwindcss": "^0.2.2",
"tailwindcss": "^3.2.4",
"typescript": "^4.9.5",
"vite": "^4.1.5",
"vite-plugin-svgr": "^2.4.0"
"vite": "^5.0.12",
"vite-plugin-svgr": "^4.2.0"
}
}

@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" width="33" height="33" viewBox="0 0 33 33" fill="none">
<path d="M8.25 13.75V11C8.25 6.44875 9.625 2.75 16.5 2.75C23.375 2.75 24.75 6.44875 24.75 11V13.75" stroke="#ECECF1" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M23.375 30.25H9.625C4.125 30.25 2.75 28.875 2.75 23.375V20.625C2.75 15.125 4.125 13.75 9.625 13.75H23.375C28.875 13.75 30.25 15.125 30.25 20.625V23.375C30.25 28.875 28.875 30.25 23.375 30.25Z" stroke="#ECECF1" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M21.9951 22H22.0075" stroke="#ECECF1" stroke-width="3" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M16.4938 22H16.5061" stroke="#ECECF1" stroke-width="3" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M10.9924 22H11.0048" stroke="#ECECF1" stroke-width="3" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 919 B

@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="36" height="36" viewBox="0 0 36 36" fill="none">
<path d="M12.75 28.4551H12C6 28.4551 3 26.9551 3 19.4551V11.9551C3 5.95508 6 2.95508 12 2.95508H24C30 2.95508 33 5.95508 33 11.9551V19.4551C33 25.4551 30 28.4551 24 28.4551H23.25C22.785 28.4551 22.335 28.6801 22.05 29.0551L19.8 32.0551C18.81 33.3751 17.19 33.3751 16.2 32.0551L13.95 29.0551C13.71 28.7251 13.17 28.4551 12.75 28.4551Z" stroke="#ECECF1" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 13.05L9 16.05L12 19.05" stroke="#ECECF1" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M24 13.05L27 16.05L24 19.05" stroke="#ECECF1" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M19.5 12.5549L16.5 19.545" stroke="#ECECF1" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 980 B

@ -1,6 +1,10 @@
<svg width="36" height="36" viewBox="0 0 36 36" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12.75 28.455H12C6 28.455 3 26.955 3 19.455V11.955C3 5.95496 6 2.95496 12 2.95496H24C30 2.95496 33 5.95496 33 11.955V19.455C33 25.455 30 28.455 24 28.455H23.25C22.785 28.455 22.335 28.68 22.05 29.055L19.8 32.055C18.81 33.375 17.19 33.375 16.2 32.055L13.95 29.055C13.71 28.725 13.17 28.455 12.75 28.455Z" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 13.05L9 16.05L12 19.05" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M24 13.05L27 16.05L24 19.05" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M19.5 12.5549L16.5 19.545" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<g id="vuesax/linear/message-programming">
<g id="message-programming">
<path id="Vector" d="M12.75 28.455H12C6 28.455 3 26.955 3 19.455V11.955C3 5.95496 6 2.95496 12 2.95496H24C30 2.95496 33 5.95496 33 11.955V19.455C33 25.455 30 28.455 24 28.455H23.25C22.785 28.455 22.335 28.68 22.05 29.055L19.8 32.055C18.81 33.375 17.19 33.375 16.2 32.055L13.95 29.055C13.71 28.725 13.17 28.455 12.75 28.455Z" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path id="Vector_2" d="M12 13.05L9 16.05L12 19.05" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path id="Vector_3" d="M24 13.05L27 16.05L24 19.05" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path id="Vector_4" d="M19.5 12.555L16.5 19.545" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
</g>
</g>
</svg>

@ -0,0 +1,5 @@
<svg xmlns="http://www.w3.org/2000/svg" width="29" height="29" viewBox="0 0 29 29" fill="none">
<path d="M10.2708 22.9584H9.66663C4.83329 22.9584 2.41663 21.7501 2.41663 15.7084V9.66675C2.41663 4.83341 4.83329 2.41675 9.66663 2.41675H19.3333C24.1666 2.41675 26.5833 4.83341 26.5833 9.66675V15.7084C26.5833 20.5417 24.1666 22.9584 19.3333 22.9584H18.7291C18.3545 22.9584 17.992 23.1397 17.7625 23.4417L15.95 25.8584C15.1525 26.9217 13.8475 26.9217 13.05 25.8584L11.2375 23.4417C11.0441 23.1759 10.597 22.9584 10.2708 22.9584Z" stroke="#ECECF1" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8.45837 9.66675H20.5417" stroke="#ECECF1" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8.45837 15.7083H15.7084" stroke="#ECECF1" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

@ -1,13 +1,14 @@
//TODO - Add hyperlinks to text
//TODO - Styling
import DocsGPT3 from './assets/cute_docsgpt3.svg';
export default function About() {
return (
<div className="mx-5 grid min-h-screen md:mx-36">
<article className="place-items-left mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col gap-6 rounded-3xl bg-gray-100 p-6 text-jet lg:p-10 xl:p-16">
<article className="place-items-left mx-auto my-auto flex w-full max-w-6xl flex-col gap-4 rounded-3xl bg-gray-100 dark:bg-gun-metal p-6 text-jet dark:text-bright-gray lg:p-6 xl:p-10">
<div className="flex items-center">
<p className="mr-2 text-3xl">About DocsGPT</p>
<p className="text-[21px]">🦖</p>
<img className="h14 mb-2" src={DocsGPT3} alt="DocsGPT" />
</div>
<p className="mt-4">
Find the information in your documentation through AI-powered
@ -30,23 +31,23 @@ export default function About() {
</p>
<p className="mt-4 ml-2">
1. Navigate to{' '}
<span className="bg-gray-200 italic"> /application</span> folder
<span className="bg-gray-200 dark:bg-outer-space italic"> /application</span> folder
</p>
<p className="mt-4 ml-2">
2. Install dependencies from{' '}
<span className="bg-gray-200 italic">
<span className="bg-gray-200 dark:bg-outer-space italic">
pip install -r requirements.txt
</span>
</p>
<p className="mt-4 ml-2">
3. Prepare a <span className="bg-gray-200 italic">.env</span> file.
Copy <span className="bg-gray-200 italic">.env_sample</span> and
create <span className="bg-gray-200 italic">.env</span> with your
3. Prepare a <span className="bg-gray-200 dark:bg-outer-space italic">.env</span> file.
Copy <span className="bg-gray-200 dark:bg-outer-space italic">.env_sample</span> and
create <span className="bg-gray-200 dark:bg-outer-space italic">.env</span> with your
OpenAI API token
</p>
<p className="mt-4 ml-2">
4. Run the app with{' '}
<span className="bg-gray-200 italic">python app.py</span>
<span className="bg-gray-200 dark:bg-outer-space italic">python app.py</span>
</p>
</div>
@ -54,7 +55,7 @@ export default function About() {
Currently it uses{' '}
<span className="text-blue-950 font-medium">DocsGPT</span>{' '}
documentation, so it will respond to information relevant to{' '}
<span className="text-blue-950 font-medium">DocsGPT</span> . If you
<span className="text-blue-950 font-medium">DocsGPT</span>. If you
want to train it on different documentation - please follow
<a
className="text-blue-500"

@ -2,18 +2,19 @@ import { Routes, Route } from 'react-router-dom';
import Navigation from './Navigation';
import Conversation from './conversation/Conversation';
import About from './About';
import PageNotFound from './PageNotFound';
import { inject } from '@vercel/analytics';
import { useMediaQuery } from './hooks';
import { useState } from 'react';
import { useState,useEffect } from 'react';
import Setting from './Setting';
inject();
export default function App() {
const { isMobile } = useMediaQuery();
const [navOpen, setNavOpen] = useState(!isMobile);
return (
<div className="min-h-full min-w-full">
<div className="min-h-full min-w-full dark:bg-raisin-black">
<Navigation navOpen={navOpen} setNavOpen={setNavOpen} />
<div
className={`transition-all duration-200 ${
@ -25,6 +26,8 @@ export default function App() {
<Routes>
<Route path="/" element={<Conversation />} />
<Route path="/about" element={<About />} />
<Route path="*" element={<PageNotFound />} />
<Route path="/settings" element={<Setting />} />
</Routes>
</div>
</div>

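Note (not part of the diff): react-router-dom v6 ranks routes instead of matching them in declaration order, so the `*` catch-all added above does not shadow the more specific `/settings` route even though it appears first in the list.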
@ -1,18 +0,0 @@
export default function Avatar({
avatar,
size,
className,
}: {
avatar: string;
size?: 'SMALL' | 'MEDIUM' | 'LARGE';
className: string;
}) {
const styles = {
transform: 'scale(-1, 1)',
};
return (
<div style={styles} className={className}>
{avatar}
</div>
);
}

@ -1,32 +1,80 @@
import { useDarkTheme, useMediaQuery } from './hooks';
import DocsGPT3 from './assets/cute_docsgpt3.svg';
export default function Hero({ className = '' }: { className?: string }) {
// const isMobile = window.innerWidth <= 768;
const { isMobile } = useMediaQuery();
const [isDarkTheme] = useDarkTheme()
return (
<div className={`mt-14 mb-12 flex flex-col `}>
<div className="mb-10 flex items-center justify-center ">
<div className={`mt-14 ${isMobile ? 'mb-2' : 'mb-12'} flex flex-col text-black-1000 dark:text-bright-gray`}>
<div className=" mb-2 flex items-center justify-center sm:mb-10">
<p className="mr-2 text-4xl font-semibold">DocsGPT</p>
<p className="text-[27px]">🦖</p>
<img className="mb-2 h-14" src={DocsGPT3} alt="DocsGPT" />
</div>
<p className="mb-3 text-center leading-6 text-black-1000">
Welcome to DocsGPT, your technical documentation assistant!
</p>
<p className="mb-3 text-center leading-6 text-black-1000">
Enter a query related to the information in the documentation you
selected to receive
<br /> and we will provide you with the most relevant answers.
</p>
<p className="mb-3 text-center leading-6 text-black-1000">
Start by entering your query in the input field below and we will do the
rest!
</p>
<div className="sections mt-1 flex flex-wrap items-center justify-center gap-1 sm:gap-1 md:gap-0 ">
<div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tr-none md:rounded-br-none">
<div className="h-full rounded-[45px] bg-white p-6 md:rounded-tr-none md:rounded-br-none">
<img
src="/message-text.svg"
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">Chat with Your Data</h2>
<p className="w-[250px] text-xs text-gray-500">
{isMobile ? (
<p className="mb-3 text-center leading-6">
Welcome to <span className="font-bold ">DocsGPT</span>, your technical
documentation assistant! Start by entering your query in the input
field below, and we&apos;ll provide you with the most relevant
answers.
</p>
) : (
<>
<p className="mb-3 text-center leading-6">
Welcome to DocsGPT, your technical documentation assistant!
</p>
<p className="mb-3 text-center leading-6">
Enter a query related to the information in the documentation you
selected to receive
<br /> and we will provide you with the most relevant answers.
</p>
<p className="mb-3 text-center leading-6">
Start by entering your query in the input field below and we will do
the rest!
</p>
</>
)}
<div
className={`sections ${isMobile ? '' : 'mt-1'
} flex flex-wrap items-center justify-center gap-2 sm:gap-1 md:gap-0`}
>
{/* first */}
<div className="h-auto md:h-60 rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/70 dark:from-[#D16FF8] via-[#3B82F6] dark:via-[#48E6E0] to-[#9333EA]/50 dark:to-[#C85EF6] p-1 md:rounded-tr-none md:rounded-br-none">
<div
className={`h-full rounded-[45px] bg-white dark:bg-dark-charcoal p-${isMobile ? '3.5' : '6 py-8'
} md:rounded-tr-none md:rounded-br-none`}
>
{/* Add Mobile check here */}
{isMobile ? (
<div className="flex justify-center">
<img
src={isDarkTheme ? "/message-text-dark.svg" : "/message-text.svg"}
alt="lock"
className="h-[24px] w-[24px] "
/>
<h2 className="mb-0 pl-1 text-lg font-bold">
Chat with Your Data
</h2>
</div>
) : (
<>
<img
src={isDarkTheme ? "/message-text-dark.svg" : "/message-text.svg"}
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">
Chat with Your Data
</h2>
</>
)}
<p
className={
isMobile
? `w-[250px] text-center text-xs text-gray-500 dark:text-bright-gray`
: `w-[250px] text-xs text-gray-500 dark:text-bright-gray`
}
>
DocsGPT will use your data to answer questions. Whether it's
documentation, source code, or Microsoft files, DocsGPT allows you
to have interactive conversations and find answers based on the
@ -34,12 +82,35 @@ export default function Hero({ className = '' }: { className?: string }) {
</p>
</div>
</div>
<div className=" rounded-[50px] bg-gradient-to-r from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-none md:py-1 md:px-0">
<div className="rounded-[45px] bg-white px-6 py-4 md:rounded-none">
<img src="/lock.svg" alt="lock" className="h-[24px] w-[24px]" />
<h2 className="mt-2 mb-3 text-lg font-bold">Secure Data Storage</h2>
<p className=" w-[250px] text-xs text-gray-500">
{/* second */}
<div className="h-auto md:h-60 rounded-[50px] bg-gradient-to-r from-[#6EE7B7]/70 dark:from-[#D16FF8] via-[#3B82F6] dark:via-[#48E6E0] to-[#9333EA]/50 dark:to-[#C85EF6] p-1 md:rounded-none md:py-1 md:px-0">
<div
className={`h-full rounded-[45px] bg-white dark:bg-dark-charcoal p-${isMobile ? '3.5' : '6 py-6'
} md:rounded-none`}
>
{/* Add Mobile check here */}
{isMobile ? (
<div className="flex justify-center ">
<img src={isDarkTheme ? "/lock-dark.svg" : "/lock.svg"} alt="lock" className="h-[24px] w-[24px]" />
<h2 className="mb-0 pl-1 text-lg font-bold">
Secure Data Storage
</h2>
</div>
) : (
<>
<img src={isDarkTheme ? "/lock-dark.svg" : "/lock.svg"} alt="lock" className="h-[24px] w-[24px]" />
<h2 className="mt-2 mb-3 text-lg font-bold">
Secure Data Storage
</h2>
</>
)}
<p
className={
isMobile
? `w-[250px] text-center text-xs text-gray-500 dark:text-bright-gray`
: `w-[250px] text-xs text-gray-500 dark:text-bright-gray`
}
>
The security of your data is our top priority. DocsGPT ensures the
utmost protection for your sensitive information. With secure data
storage and privacy measures in place, you can trust that your
@ -47,15 +118,43 @@ export default function Hero({ className = '' }: { className?: string }) {
</p>
</div>
</div>
<div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/80 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tl-none md:rounded-bl-none">
<div className="rounded-[45px] bg-white p-6 px-6 lg:rounded-tl-none lg:rounded-bl-none">
<img
src="/message-programming.svg"
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">Open Source Code</h2>
<p className=" w-[250px] text-xs text-gray-500">
{/* third */}
<div className="h-auto md:h-60 rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/70 dark:from-[#D16FF8] via-[#3B82F6] dark:via-[#48E6E0] to-[#9333EA]/50 dark:to-[#C85EF6] p-1 md:rounded-tl-none md:rounded-bl-none ">
<div
className={`h-full firefox rounded-[45px] bg-white dark:bg-dark-charcoal p-${isMobile ? '3.5' : '6 px-6 '
} lg:rounded-tl-none lg:rounded-bl-none`}
>
{/* Add Mobile check here */}
{isMobile ? (
<div className="flex justify-center">
<img
src={isDarkTheme ? "message-programming-dark.svg" : "/message-programming.svg"}
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mb-0 pl-1 text-lg font-bold">
Open Source Code
</h2>
</div>
) : (
<>
<img
src={isDarkTheme ? "/message-programming-dark.svg" : "/message-programming.svg"}
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">
Open Source Code
</h2>
</>
)}
<p
className={
isMobile
? `w-[250px] text-center text-xs text-gray-500 dark:text-bright-gray`
: `w-[250px] text-xs text-gray-500 dark:text-bright-gray`
}
>
DocsGPT is built on open source principles, promoting transparency
and collaboration. The source code is freely available, enabling
developers to contribute, enhance, and customize the app to meet

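Note (not part of the diff): Hero, Navigation, and Setting all import useDarkTheme from './hooks', whose implementation is not included in this changeset. Below is a minimal sketch of such a hook, assuming it toggles Tailwind's `dark` class on the document root and persists the choice; the storage key and exact behavior in the repo may differ.

import { useEffect, useState } from 'react';

// Hypothetical sketch of the useDarkTheme hook imported above.
export function useDarkTheme(): [boolean, () => void] {
  const [isDark, setIsDark] = useState<boolean>(
    () => localStorage.getItem('selectedTheme') === 'Dark', // assumed storage key
  );

  useEffect(() => {
    // Tailwind's `dark:` variants key off a `dark` class on <html>.
    document.documentElement.classList.toggle('dark', isDark);
    localStorage.setItem('selectedTheme', isDark ? 'Dark' : 'Light');
  }, [isDark]);

  return [isDark, () => setIsDark((prev) => !prev)];
}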
@ -1,19 +1,27 @@
import { useEffect, useRef, useState } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { NavLink, useNavigate } from 'react-router-dom';
import Arrow1 from './assets/arrow.svg';
import DocsGPT3 from './assets/cute_docsgpt3.svg';
import Documentation from './assets/documentation.svg';
import DocumentationDark from './assets/documentation-dark.svg';
import Discord from './assets/discord.svg';
import DiscordDark from './assets/discord-dark.svg';
import Arrow2 from './assets/dropdown-arrow.svg';
import Exit from './assets/exit.svg';
import Message from './assets/message.svg';
import Expand from './assets/expand.svg';
import Trash from './assets/trash.svg';
import Github from './assets/github.svg';
import GithubDark from './assets/github-dark.svg';
import Hamburger from './assets/hamburger.svg';
import Key from './assets/key.svg';
import HamburgerDark from './assets/hamburger-dark.svg';
import Info from './assets/info.svg';
import Link from './assets/link.svg';
import Discord from './assets/discord.svg';
import Github from './assets/github.svg';
import InfoDark from './assets/info-dark.svg';
import SettingGear from './assets/settingGear.svg';
import SettingGearDark from './assets/settingGear-dark.svg';
import Add from './assets/add.svg';
import UploadIcon from './assets/upload.svg';
import { ActiveState } from './models/misc';
import APIKeyModal from './preferences/APIKeyModal';
import { useDispatch, useSelector } from 'react-redux';
import {
selectApiKeyStatus,
selectSelectedDocs,
@ -33,12 +41,28 @@ import Upload from './upload/Upload';
import { Doc, getConversations } from './preferences/preferenceApi';
import SelectDocsModal from './preferences/SelectDocsModal';
import ConversationTile from './conversation/ConversationTile';
import { useDarkTheme } from './hooks';
interface NavigationProps {
navOpen: boolean;
setNavOpen: React.Dispatch<React.SetStateAction<boolean>>;
}
const NavImage: React.FC<{ Light: string, Dark: string }> = ({ Light, Dark }) => {
return (
<>
<img
src={Dark}
alt="icon"
className="ml-2 w-5 hidden dark:block "
/>
<img
src={Light}
alt="icon"
className="ml-2 w-5 dark:hidden "
/>
</>
)
}
export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
const dispatch = useDispatch();
const docs = useSelector(selectSourceDocs);
@ -46,7 +70,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
const conversations = useSelector(selectConversations);
const conversationId = useSelector(selectConversationId);
const { isMobile } = useMediaQuery();
const [isDarkTheme] = useDarkTheme();
const [isDocsListOpen, setIsDocsListOpen] = useState(false);
const isApiKeySet = useSelector(selectApiKeyStatus);
@ -63,7 +87,8 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
const navRef = useRef(null);
const apiHost = import.meta.env.VITE_API_HOST || 'https://docsapi.arc53.com';
const embeddingsName =
import.meta.env.VITE_EMBEDDINGS_NAME || 'openai_text-embedding-ada-002';
import.meta.env.VITE_EMBEDDINGS_NAME ||
'huggingface_sentence-transformers/all-mpnet-base-v2';
const navigate = useNavigate();
@ -72,7 +97,6 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
fetchConversations();
}
}, [conversations, dispatch]);
async function fetchConversations() {
return await getConversations()
.then((fetchedConversations) => {
@ -88,12 +112,6 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
method: 'POST',
})
.then(() => {
// remove the image element from the DOM
const imageElement = document.querySelector(
`#img-${id}`,
) as HTMLElement;
const parentElement = imageElement.parentNode as HTMLElement;
parentElement.parentNode?.removeChild(parentElement);
fetchConversations();
})
.catch((error) => console.error(error));
@ -171,50 +189,48 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
*/
useEffect(() => {
if (isMobile) {
setNavOpen(false);
return;
}
setNavOpen(true);
setNavOpen(!isMobile);
}, [isMobile]);
return (
<>
{!navOpen && (
<button
className="duration-25 absolute relative top-3 left-3 z-20 hidden transition-all md:block"
className="duration-25 absolute top-3 left-3 z-20 hidden transition-all md:block"
onClick={() => {
setNavOpen(!navOpen);
}}
>
<img
src={Arrow1}
src={Expand}
alt="menu toggle"
className={`${
!navOpen ? 'rotate-180' : 'rotate-0'
} m-auto w-3 transition-all duration-200`}
className={`${!navOpen ? 'rotate-180' : 'rotate-0'
} m-auto transition-all duration-200`}
/>
</button>
)}
<div
ref={navRef}
className={`${
!navOpen && '-ml-96 md:-ml-[18rem]'
} duration-20 fixed z-20 flex h-full w-72 flex-col border-r-2 bg-gray-50 transition-all`}
className={`${!navOpen && '-ml-96 md:-ml-[18rem]'
} duration-20 fixed top-0 z-20 flex h-full w-72 flex-col border-r-[1px] border-b-0 dark:border-r-purple-taupe bg-white dark:bg-chinese-black transition-all dark:text-white`}
>
<div className={'visible h-16 w-full border-b-2 md:h-12'}>
<div
className={'visible mt-2 flex h-[6vh] w-full justify-between md:h-12'}
>
<div className="my-auto mx-4 flex cursor-pointer gap-1.5">
<img className="mb-2 h-10" src={DocsGPT3} alt="" />
<p className="my-auto text-2xl font-semibold">DocsGPT</p>
</div>
<button
className="float-right mr-5 mt-5 h-5 w-5 md:mt-3"
className="float-right mr-5"
onClick={() => {
setNavOpen(!navOpen);
}}
>
<img
src={Arrow1}
src={Expand}
alt="menu toggle"
className={`${
!navOpen ? 'rotate-180' : 'rotate-0'
} m-auto w-3 transition-all duration-200`}
className={`${!navOpen ? 'rotate-180' : 'rotate-0'
} m-auto transition-all duration-200`}
/>
</button>
</div>
@ -229,159 +245,174 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
);
}}
className={({ isActive }) =>
`${
isActive && conversationId === null ? 'bg-gray-3000' : ''
} my-auto mx-4 mt-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100`
`${isActive ? 'bg-gray-3000 dark:bg-transparent' : ''
} group sticky mx-4 mt-4 flex cursor-pointer gap-2.5 rounded-3xl border border-silver p-3 hover:border-rainy-gray dark:border-purple-taupe dark:text-white dark:hover:bg-transparent hover:bg-gray-3000`
}
>
<img src={Message} className="ml-4 w-5"></img>
<p className="my-auto text-eerie-black">New Chat</p>
<img
src={Add}
alt="new"
className="opacity-80 group-hover:opacity-100"
/>
<p className=" text-sm text-dove-gray group-hover:text-neutral-600 dark:text-chinese-silver dark:group-hover:text-bright-gray">
New Chat
</p>
</NavLink>
<div className="conversations-container max-h-[25rem] overflow-y-auto">
{conversations
? conversations.map((conversation, index) => (
<ConversationTile
key={index}
conversation={conversation}
selectConversation={(id) => handleConversationClick(id)}
onDeleteConversation={(id) => handleDeleteConversation(id)}
onSave={(conversation) =>
updateConversationName(conversation)
}
/>
))
: null}
<div className="mb-auto h-[56vh] overflow-x-hidden dark:text-white overflow-y-scroll">
{conversations && (
<div>
<p className="ml-6 mt-3 text-sm font-semibold">Chats</p>
<div className="conversations-container">
{conversations?.map((conversation) => (
<ConversationTile
key={conversation.id}
conversation={conversation}
selectConversation={(id) => handleConversationClick(id)}
onDeleteConversation={(id) => handleDeleteConversation(id)}
onSave={(conversation) =>
updateConversationName(conversation)
}
/>
))}
</div>
</div>
)}
</div>
<div className="flex-grow border-b-2 border-gray-100"></div>
<div className="flex flex-col-reverse border-b-2">
<div className="relative my-4 flex gap-2 px-2">
<div
className="flex h-12 w-full cursor-pointer justify-between rounded-3xl border-2 bg-white"
onClick={() => setIsDocsListOpen(!isDocsListOpen)}
>
{selectedDocs && (
<p className="my-3 mx-4 overflow-hidden text-ellipsis whitespace-nowrap">
{selectedDocs.name} {selectedDocs.version}
</p>
)}
<img
src={Arrow2}
alt="arrow"
className={`${
isDocsListOpen ? 'rotate-0' : 'rotate-180'
} ml-auto mr-3 w-3 transition-all`}
/>
</div>
<img
className="mt-2 h-9 w-9 hover:cursor-pointer"
src={UploadIcon}
onClick={() => setUploadModalState('ACTIVE')}
></img>
{isDocsListOpen && (
<div className="absolute top-12 left-0 right-6 ml-2 mr-4 max-h-52 overflow-y-scroll bg-white shadow-lg">
{docs ? (
docs.map((doc, index) => {
if (doc.model === embeddingsName) {
return (
<div
key={index}
onClick={() => {
dispatch(setSelectedDocs(doc));
setIsDocsListOpen(false);
}}
className="flex h-10 w-full cursor-pointer items-center justify-between border-x-2 border-b-2 hover:bg-gray-100"
>
<p className="ml-5 flex-1 overflow-hidden overflow-ellipsis whitespace-nowrap py-3">
{doc.name} {doc.version}
</p>
{doc.location === 'local' ? (
<img
src={Exit}
alt="Exit"
className="mr-4 h-3 w-3 cursor-pointer hover:opacity-50"
id={`img-${index}`}
onClick={(event) => {
event.stopPropagation();
handleDeleteClick(index, doc);
}}
/>
) : null}
</div>
);
}
})
) : (
<div className="h-10 w-full cursor-pointer border-x-2 border-b-2 hover:bg-gray-100">
<p className="ml-5 py-3">No default documentation.</p>
</div>
<div className="flex h-auto flex-col justify-end text-eerie-black dark:text-white">
<div className="flex flex-col-reverse border-b-[1px] dark:border-b-purple-taupe">
<div className="relative my-4 flex gap-2 px-2">
<div
className="flex h-12 w-5/6 cursor-pointer justify-between rounded-3xl border-2 dark:border-chinese-silver bg-white dark:bg-chinese-black"
onClick={() => setIsDocsListOpen(!isDocsListOpen)}
>
{selectedDocs && (
<p className="my-3 mx-4 overflow-hidden text-ellipsis whitespace-nowrap">
{selectedDocs.name} {selectedDocs.version}
</p>
)}
<img
src={Arrow2}
alt="arrow"
className={`${!isDocsListOpen ? 'rotate-0' : 'rotate-180'
} ml-auto mr-3 w-3 transition-all`}
/>
</div>
)}
<img
className="mt-2 h-9 w-9 hover:cursor-pointer"
src={UploadIcon}
onClick={() => setUploadModalState('ACTIVE')}
></img>
{isDocsListOpen && (
<div className="absolute top-12 left-0 right-6 z-10 ml-2 mr-4 max-h-52 overflow-y-scroll bg-white dark:bg-chinese-black shadow-lg">
{docs ? (
docs.map((doc, index) => {
if (doc.model === embeddingsName) {
return (
<div
key={index}
onClick={() => {
dispatch(setSelectedDocs(doc));
setIsDocsListOpen(false);
}}
className="flex h-10 w-full cursor-pointer items-center justify-between border-x-2 border-b-[1px] dark:border-purple-taupe hover:bg-gray-100 dark:hover:bg-purple-taupe"
>
<p className="ml-5 flex-1 overflow-hidden overflow-ellipsis whitespace-nowrap py-3">
{doc.name} {doc.version}
</p>
{doc.location === 'local' && (
<img
src={Trash}
alt="Delete"
className="mr-4 h-4 w-4 cursor-pointer hover:opacity-50"
id={`img-${index}`}
onClick={(event) => {
event.stopPropagation();
handleDeleteClick(index, doc);
}}
/>
)}
</div>
);
}
})
) : (
<div className="h-10 w-full cursor-pointer border-b-[1px] dark:border-b-purple-taupe hover:bg-gray-100">
<p className="ml-5 py-3">No default documentation.</p>
</div>
)}
</div>
)}
</div>
<p className="ml-6 mt-3 text-sm font-semibold">Source Docs</p>
</div>
<p className="ml-6 mt-3 font-bold text-jet">Source Docs</p>
</div>
<div className="flex flex-col gap-2 border-b-2 py-2">
<div
className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
onClick={() => {
setApiKeyModalState('ACTIVE');
}}
>
<img src={Key} alt="key" className="ml-2 w-6" />
<p className="my-auto text-eerie-black">Reset Key</p>
<div className="flex flex-col gap-2 border-b-[1px] dark:border-b-purple-taupe py-2">
<NavLink
to="/settings"
className={({ isActive }) =>
`my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-purple-taupe ${isActive ? 'bg-gray-3000 dark:bg-transparent' : ''
}`
}
>
<NavImage Light={SettingGear} Dark={SettingGearDark} />
<p className="my-auto text-sm text-eerie-black dark:text-white">Settings</p>
</NavLink>
</div>
</div>
<div className="flex flex-col gap-2 border-b-2 py-2">
<NavLink
to="/about"
className={({ isActive }) =>
`my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 ${
isActive ? 'bg-gray-3000' : ''
}`
}
>
<img src={Info} alt="info" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">About</p>
</NavLink>
<div className="flex flex-col gap-2 border-b-[1.5px] dark:border-b-purple-taupe py-2">
<NavLink
to="/about"
className={({ isActive }) =>
`my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-purple-taupe ${isActive ? 'bg-gray-3000 dark:bg-purple-taupe' : ''
}`
}
>
<NavImage Light={Info} Dark={InfoDark} />
<p className="my-auto text-sm">About</p>
</NavLink>
<a
href="https://docs.docsgpt.co.uk/"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Link} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Documentation</p>
</a>
<a
href="https://discord.gg/WHJdfbQDR4"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Discord} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Visit our Discord</p>
</a>
<a
href="https://docs.docsgpt.co.uk/"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-purple-taupe"
>
<NavImage Light={Documentation} Dark={DocumentationDark} />
<p className="my-auto text-sm ">Documentation</p>
</a>
<a
href="https://discord.gg/WHJdfbQDR4"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-purple-taupe"
>
<NavImage Light={Discord} Dark={DiscordDark} />
{/* <img src={isDarkTheme ? DiscordDark : Discord} alt="discord-link" className="ml-2 w-5" /> */}
<p className="my-auto text-sm">
Visit our Discord
</p>
</a>
<a
href="https://github.com/arc53/DocsGPT"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Github} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Visit our Github</p>
</a>
<a
href="https://github.com/arc53/DocsGPT"
target="_blank"
rel="noreferrer"
className="mt-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-purple-taupe"
>
<NavImage Light={Github} Dark={GithubDark} />
<p className="my-auto text-sm">
Visit our Github
</p>
</a>
</div>
</div>
</div>
<div className="fixed h-16 w-full border-b-2 bg-gray-50 md:hidden">
<div className="fixed z-10 h-16 w-full border-b-2 dark:border-b-purple-taupe bg-gray-50 dark:bg-chinese-black md:hidden">
<button
className="mt-5 ml-6 h-6 w-6 md:hidden"
onClick={() => setNavOpen(true)}
>
<img src={Hamburger} alt="menu toggle" className="w-7" />
<img src={isDarkTheme ? HamburgerDark : Hamburger} alt="menu toggle" className="w-7" />
</button>
</div>
<SelectDocsModal

@ -0,0 +1,15 @@
import { Link } from 'react-router-dom';
export default function PageNotFound() {
return (
<div className="mx-5 grid min-h-screen md:mx-36">
<p className="mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col place-items-center gap-6 rounded-3xl bg-gray-100 p-6 text-jet lg:p-10 xl:p-16">
<h1>404</h1>
<p>The page you are looking for does not exist.</p>
<button className="pointer-cursor mr-4 flex cursor-pointer items-center justify-center rounded-full bg-blue-1000 py-2 px-4 text-white hover:bg-blue-3000">
<Link to="/">Go Back Home</Link>
</button>
</p>
</div>
);
}

@ -0,0 +1,822 @@
import React, { useState, useEffect } from 'react';
import { useSelector, useDispatch } from 'react-redux';
import Arrow2 from './assets/dropdown-arrow.svg';
import ArrowLeft from './assets/arrow-left.svg';
import ArrowRight from './assets/arrow-right.svg';
import Trash from './assets/trash.svg';
import {
selectPrompt,
setPrompt,
selectSourceDocs,
} from './preferences/preferenceSlice';
import { Doc } from './preferences/preferenceApi';
import { useDarkTheme } from './hooks';
import { Light } from 'react-syntax-highlighter';
type PromptProps = {
prompts: { name: string; id: string; type: string }[];
selectedPrompt: { name: string; id: string; type: string };
onSelectPrompt: (name: string, id: string, type: string) => void;
setPrompts: (prompts: { name: string; id: string; type: string }[]) => void;
apiHost: string;
};
const Setting: React.FC = () => {
const tabs = ['General', 'Prompts', 'Documents'];
//const tabs = ['General', 'Prompts', 'Documents', 'Widgets'];
const [activeTab, setActiveTab] = useState('General');
const [prompts, setPrompts] = useState<
{ name: string; id: string; type: string }[]
>([]);
const selectedPrompt = useSelector(selectPrompt);
const [isAddPromptModalOpen, setAddPromptModalOpen] = useState(false);
const documents = useSelector(selectSourceDocs);
const [isAddDocumentModalOpen, setAddDocumentModalOpen] = useState(false);
const dispatch = useDispatch();
const apiHost = import.meta.env.VITE_API_HOST || 'https://docsapi.arc53.com';
const [widgetScreenshot, setWidgetScreenshot] = useState<File | null>(null);
const updateWidgetScreenshot = (screenshot: File | null) => {
setWidgetScreenshot(screenshot);
};
useEffect(() => {
const fetchPrompts = async () => {
try {
const response = await fetch(`${apiHost}/api/get_prompts`);
if (!response.ok) {
throw new Error('Failed to fetch prompts');
}
const promptsData = await response.json();
setPrompts(promptsData);
} catch (error) {
console.error(error);
}
};
fetchPrompts();
}, []);
const onDeletePrompt = (name: string, id: string) => {
setPrompts(prompts.filter((prompt) => prompt.id !== id));
fetch(`${apiHost}/api/delete_prompt`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
// send id in body only
body: JSON.stringify({ id: id }),
})
.then((response) => {
if (!response.ok) {
throw new Error('Failed to delete prompt');
}
})
.catch((error) => {
console.error(error);
});
};
const handleDeleteClick = (index: number, doc: Doc) => {
const docPath = 'indexes/' + 'local' + '/' + doc.name;
fetch(`${apiHost}/api/delete_old?path=${docPath}`, {
method: 'GET',
})
.then(() => {
// remove the image element from the DOM
const imageElement = document.querySelector(
`#img-${index}`,
) as HTMLElement;
const parentElement = imageElement.parentNode as HTMLElement;
parentElement.parentNode?.removeChild(parentElement);
})
.catch((error) => console.error(error));
};
return (
<div className="p-4 pt-20 md:p-12 wa">
<p className="text-2xl font-bold text-eerie-black dark:text-bright-gray">Settings</p>
<div className="mt-6 flex flex-row items-center space-x-4 overflow-x-auto md:space-x-8 ">
<div className="md:hidden">
<button
onClick={() => scrollTabs(-1)}
className="flex h-8 w-8 items-center justify-center rounded-full border-2 border-purple-30 transition-all hover:bg-gray-100"
>
<img src={ArrowLeft} alt="left-arrow" className="h-6 w-6" />
</button>
</div>
<div className="flex flex-nowrap space-x-4 overflow-x-auto md:space-x-8">
{tabs.map((tab, index) => (
<button
key={index}
onClick={() => setActiveTab(tab)}
className={`h-9 rounded-3xl px-4 font-bold ${activeTab === tab
? 'bg-purple-3000 text-purple-30 dark:bg-dark-charcoal'
: 'text-gray-6000'
}`}
>
{tab}
</button>
))}
</div>
<div className="md:hidden">
<button
onClick={() => scrollTabs(1)}
className="flex h-8 w-8 items-center justify-center rounded-full border-2 border-purple-30 hover:bg-gray-100"
>
<img src={ArrowRight} alt="right-arrow" className="h-6 w-6" />
</button>
</div>
</div>
{renderActiveTab()}
{/* {activeTab === 'Widgets' && (
<Widgets
widgetScreenshot={widgetScreenshot}
onWidgetScreenshotChange={updateWidgetScreenshot}
/>
)} */}
</div>
);
function scrollTabs(direction: number) {
const container = document.querySelector('.flex-nowrap');
if (container) {
container.scrollLeft += direction * 100; // Adjust the scroll amount as needed
}
}
function renderActiveTab() {
switch (activeTab) {
case 'General':
return <General />;
case 'Prompts':
return (
<Prompts
prompts={prompts}
selectedPrompt={selectedPrompt}
onSelectPrompt={(name, id, type) =>
dispatch(setPrompt({ name: name, id: id, type: type }))
}
setPrompts={setPrompts}
apiHost={apiHost}
/>
);
case 'Documents':
return (
<Documents
documents={documents}
handleDeleteDocument={handleDeleteClick}
/>
);
case 'Widgets':
return (
<Widgets
widgetScreenshot={widgetScreenshot} // Add this line
onWidgetScreenshotChange={updateWidgetScreenshot} // Add this line
/>
);
default:
return null;
}
}
};
const General: React.FC = () => {
const themes = ['Light', 'Dark'];
const languages = ['English'];
const [isDarkTheme, toggleTheme] = useDarkTheme();
const [selectedTheme, setSelectedTheme] = useState(isDarkTheme ? 'Dark' : 'Light');
const [selectedLanguage, setSelectedLanguage] = useState(languages[0]);
return (
<div className="mt-[59px]">
<div className="mb-4">
<p className="font-bold text-jet dark:text-bright-gray">Select Theme</p>
<Dropdown
options={themes}
selectedValue={selectedTheme}
onSelect={(option:string)=>{
setSelectedTheme(option);
option !==selectedTheme && toggleTheme();
}}
/>
</div>
<div>
<p className="font-bold text-jet dark:text-bright-gray">Select Language</p>
<Dropdown
options={languages}
selectedValue={selectedLanguage}
onSelect={setSelectedLanguage}
/>
</div>
</div>
);
};
export default Setting;
const Prompts: React.FC<PromptProps> = ({
prompts,
selectedPrompt,
onSelectPrompt,
setPrompts,
apiHost,
}) => {
const handleSelectPrompt = ({
name,
id,
type,
}: {
name: string;
id: string;
type: string;
}) => {
setNewPromptName(name);
onSelectPrompt(name, id, type);
};
const [newPromptName, setNewPromptName] = useState(selectedPrompt.name);
const [newPromptContent, setNewPromptContent] = useState('');
const handleAddPrompt = async () => {
try {
const response = await fetch(`${apiHost}/api/create_prompt`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: newPromptName,
content: newPromptContent,
}),
});
if (!response.ok) {
throw new Error('Failed to add prompt');
}
const newPrompt = await response.json();
if (setPrompts) {
setPrompts([
...prompts,
{ name: newPromptName, id: newPrompt.id, type: 'private' },
]);
}
onSelectPrompt(newPromptName, newPrompt.id, newPromptContent);
setNewPromptName(newPromptName);
} catch (error) {
console.error(error);
}
};
const handleDeletePrompt = () => {
setPrompts(prompts.filter((prompt) => prompt.id !== selectedPrompt.id));
console.log('selectedPrompt.id', selectedPrompt.id);
fetch(`${apiHost}/api/delete_prompt`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ id: selectedPrompt.id }),
})
.then((response) => {
if (!response.ok) {
throw new Error('Failed to delete prompt');
}
// get 1st prompt and set it as selected
if (prompts.length > 0) {
onSelectPrompt(prompts[0].name, prompts[0].id, prompts[0].type);
setNewPromptName(prompts[0].name);
}
})
.catch((error) => {
console.error(error);
});
};
useEffect(() => {
const fetchPromptContent = async () => {
console.log('fetching prompt content');
try {
const response = await fetch(
`${apiHost}/api/get_single_prompt?id=${selectedPrompt.id}`,
{
method: 'GET',
headers: {
'Content-Type': 'application/json',
},
},
);
if (!response.ok) {
throw new Error('Failed to fetch prompt content');
}
const promptContent = await response.json();
setNewPromptContent(promptContent.content);
} catch (error) {
console.error(error);
}
};
fetchPromptContent();
}, [selectedPrompt]);
const handleSaveChanges = () => {
fetch(`${apiHost}/api/update_prompt`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
id: selectedPrompt.id,
name: newPromptName,
content: newPromptContent,
}),
})
.then((response) => {
if (!response.ok) {
throw new Error('Failed to update prompt');
}
onSelectPrompt(newPromptName, selectedPrompt.id, selectedPrompt.type);
setNewPromptName(newPromptName);
})
.catch((error) => {
console.error(error);
});
};
return (
<div className="mt-[59px]">
<div className="mb-4">
<p className="font-semibold dark:text-bright-gray">Active Prompt</p>
<DropdownPrompt
options={prompts}
selectedValue={selectedPrompt.name}
onSelect={handleSelectPrompt}
/>
</div>
<div className="mb-4">
<p className='dark:text-bright-gray'>Prompt name </p>{' '}
<p className="mb-2 text-xs italic text-eerie-black dark:text-bright-gray">
Start by editing the name
</p>
<input
type="text"
value={newPromptName}
placeholder="Active Prompt Name"
className="w-full rounded-lg border-2 p-2 dark:border-chinese-silver dark:bg-transparent dark:text-white"
onChange={(e) => setNewPromptName(e.target.value)}
/>
</div>
<div className="mb-4">
<p className="mb-2 dark:text-bright-gray">Prompt content</p>
<textarea
className="h-32 w-full rounded-lg border-2 p-2 dark:border-chinese-silver dark:text-white dark:bg-transparent"
value={newPromptContent}
onChange={(e) => setNewPromptContent(e.target.value)}
placeholder="Active prompt contents"
/>
</div>
<div className="flex justify-between">
<button
className={`rounded-lg bg-green-500 px-4 py-2 font-bold text-white transition-all hover:bg-green-700 ${newPromptName === selectedPrompt.name
? 'cursor-not-allowed opacity-50'
: ''
}`}
onClick={handleAddPrompt}
disabled={newPromptName === selectedPrompt.name}
>
Add New Prompt
</button>
<button
className={`rounded-lg bg-red-500 px-4 py-2 font-bold text-white transition-all hover:bg-red-700 ${selectedPrompt.type === 'public'
? 'cursor-not-allowed opacity-50'
: ''
}`}
onClick={handleDeletePrompt}
disabled={selectedPrompt.type === 'public'}
>
Delete Prompt
</button>
<button
className={`rounded-lg bg-blue-500 px-4 py-2 font-bold text-white transition-all hover:bg-blue-700 ${selectedPrompt.type === 'public'
? 'cursor-not-allowed opacity-50'
: ''
}`}
onClick={handleSaveChanges}
disabled={selectedPrompt.type === 'public'}
>
Save Changes
</button>
</div>
</div>
);
};
function DropdownPrompt({
options,
selectedValue,
onSelect,
}: {
options: { name: string; id: string; type: string }[];
selectedValue: string;
onSelect: (value: { name: string; id: string; type: string }) => void;
}) {
const [isOpen, setIsOpen] = useState(false);
return (
<div className="relative mt-2 w-32">
<button
onClick={() => setIsOpen(!isOpen)}
className="flex w-full cursor-pointer items-center rounded-xl border-2 dark:border-chinese-silver bg-white p-3 dark:bg-transparent"
>
<span className="flex-1 overflow-hidden text-ellipsis dark:text-bright-gray">
{selectedValue}
</span>
<img
src={Arrow2}
alt="arrow"
className={`transform ${isOpen ? 'rotate-180' : 'rotate-0'
} h-3 w-3 transition-transform`}
/>
</button>
{isOpen && (
<div className="absolute left-0 right-0 z-50 -mt-3 rounded-b-xl border-2 dark:border-chinese-silver bg-white dark:bg-dark-charcoal shadow-lg">
{options.map((option, index) => (
<div
key={index}
className="flex cursor-pointer items-center justify-between hover:bg-gray-100 dark:hover:bg-purple-taupe dark:text-bright-gray "
>
<span
onClick={() => {
onSelect(option);
setIsOpen(false);
}}
className="ml-2 flex-1 overflow-hidden overflow-ellipsis whitespace-nowrap py-3"
>
{option.name}
</span>
</div>
))}
</div>
)}
</div>
);
}
function Dropdown({
options,
selectedValue,
onSelect,
showDelete,
onDelete,
}: {
options: string[];
selectedValue: string;
onSelect: (value: string) => void;
showDelete?: boolean; // optional
onDelete?: (value: string) => void; // optional
}) {
const [isOpen, setIsOpen] = useState(false);
return (
<div className="relative mt-2 w-32">
<button
onClick={() => setIsOpen(!isOpen)}
className="flex w-full cursor-pointer items-center rounded-xl border-2 dark:border-chinese-silver bg-white p-3 dark:bg-transparent"
>
<span className="flex-1 overflow-hidden text-ellipsis dark:text-bright-gray">
{selectedValue}
</span>
<img
src={Arrow2}
alt="arrow"
className={`transform ${isOpen ? 'rotate-180' : 'rotate-0'
} h-3 w-3 transition-transform`}
/>
</button>
{isOpen && (
<div className="absolute left-0 right-0 z-50 -mt-3 rounded-b-xl border-2 bg-white dark:border-chinese-silver dark:bg-dark-charcoal shadow-lg">
{options.map((option, index) => (
<div
key={index}
className="flex cursor-pointer items-center justify-between hover:bg-gray-100 dark:hover:bg-purple-taupe"
>
<span
onClick={() => {
onSelect(option);
setIsOpen(false);
}}
className="ml-2 flex-1 overflow-hidden overflow-ellipsis whitespace-nowrap py-3 dark:text-light-gray"
>
{option}
</span>
{showDelete && onDelete && (
<button onClick={() => onDelete(option)} className="p-2">
{/* Icon or text for delete button */}
Delete
</button>
)}
</div>
))}
</div>
)}
</div>
);
}
type AddPromptModalProps = {
newPromptName: string;
onNewPromptNameChange: (name: string) => void;
onAddPrompt: () => void;
onClose: () => void;
};
const AddPromptModal: React.FC<AddPromptModalProps> = ({
newPromptName,
onNewPromptNameChange,
onAddPrompt,
onClose,
}) => {
return (
<div className="fixed top-0 left-0 flex h-screen w-screen items-center justify-center bg-gray-900 bg-opacity-50">
<div className="rounded-3xl bg-white p-4">
<p className="mb-2 text-2xl font-bold text-jet">Add New Prompt</p>
<input
type="text"
placeholder="Enter Prompt Name"
value={newPromptName}
onChange={(e) => onNewPromptNameChange(e.target.value)}
className="mb-4 w-full rounded-3xl border-2 dark:border-chinese-silver p-2"
/>
<button
onClick={onAddPrompt}
className="rounded-3xl bg-purple-300 px-4 py-2 font-bold text-white transition-all hover:bg-purple-600"
>
Save
</button>
<button
onClick={onClose}
className="mt-4 rounded-3xl px-4 py-2 font-bold text-red-500"
>
Cancel
</button>
</div>
</div>
);
};
type DocumentsProps = {
documents: Doc[] | null;
handleDeleteDocument: (index: number, document: Doc) => void;
};
const Documents: React.FC<DocumentsProps> = ({
documents,
handleDeleteDocument,
}) => {
return (
<div className="mt-8">
<div className="flex flex-col">
{/* <h2 className="text-xl font-semibold">Documents</h2> */}
<div className="mt-[27px] overflow-x-auto border dark:border-chinese-silver rounded-xl w-max">
<table className="block w-full table-auto content-center justify-center text-center dark:text-bright-gray" >
<thead>
<tr>
<th className="border-r p-4 md:w-[244px]">Document Name</th>
<th className="border-r px-4 py-2 w-[244px]">Vector Date</th>
<th className="border-r px-4 py-2 w-[244px]">Type</th>
<th className="px-4 py-2"></th>
</tr>
</thead>
<tbody>
{documents &&
documents.map((document, index) => (
<tr key={index}>
<td className="border-r border-t px-4 py-2">{document.name}</td>
<td className="border-r border-t px-4 py-2">{document.date}</td>
<td className="border-r border-t px-4 py-2">
{document.location === 'remote'
? 'Pre-loaded'
: 'Private'}
</td>
<td className="border-t px-4 py-2">
{document.location !== 'remote' && (
<img
src={Trash}
alt="Delete"
className="h-4 w-4 cursor-pointer hover:opacity-50"
id={`img-${index}`}
onClick={(event) => {
event.stopPropagation();
handleDeleteDocument(index, document);
}}
/>
)}
</td>
</tr>
))}
</tbody>
</table>
</div>
{/* <button
onClick={toggleAddDocumentModal}
className="mt-10 w-32 rounded-lg bg-purple-300 px-4 py-2 font-bold text-white transition-all hover:bg-purple-600"
>
Add New
</button> */}
</div>
{/* {isAddDocumentModalOpen && (
<AddDocumentModal
newDocument={newDocument}
onNewDocumentChange={setNewDocument}
onAddDocument={addDocument}
onClose={toggleAddDocumentModal}
/>
)} */}
</div>
);
};
type Document = {
name: string;
vectorDate: string;
vectorLocation: string;
};
// Modal for adding a new document
type AddDocumentModalProps = {
newDocument: Document;
onNewDocumentChange: (document: Document) => void;
onAddDocument: () => void;
onClose: () => void;
};
const AddDocumentModal: React.FC<AddDocumentModalProps> = ({
newDocument,
onNewDocumentChange,
onAddDocument,
onClose,
}) => {
return (
<div className="fixed top-0 left-0 flex h-screen w-screen items-center justify-center bg-gray-900 bg-opacity-50">
<div className="w-[50%] rounded-lg bg-white p-4">
<p className="mb-2 text-2xl font-bold text-jet">Add New Document</p>
<input
type="text"
placeholder="Document Name"
value={newDocument.name}
onChange={(e) =>
onNewDocumentChange({ ...newDocument, name: e.target.value })
}
className="mb-4 w-full rounded-lg border-2 p-2"
/>
<input
type="text"
placeholder="Vector Date"
value={newDocument.vectorDate}
onChange={(e) =>
onNewDocumentChange({ ...newDocument, vectorDate: e.target.value })
}
className="mb-4 w-full rounded-lg border-2 p-2"
/>
<input
type="text"
placeholder="Vector Location"
value={newDocument.vectorLocation}
onChange={(e) =>
onNewDocumentChange({
...newDocument,
vectorLocation: e.target.value,
})
}
className="mb-4 w-full rounded-lg border-2 p-2"
/>
<button
onClick={onAddDocument}
className="rounded-lg bg-purple-300 px-4 py-2 font-bold text-white transition-all hover:bg-purple-600"
>
Save
</button>
<button
onClick={onClose}
className="mt-4 rounded-lg px-4 py-2 font-bold text-red-500"
>
Cancel
</button>
</div>
</div>
);
};
const Widgets: React.FC<{
widgetScreenshot: File | null;
onWidgetScreenshotChange: (screenshot: File | null) => void;
}> = ({ widgetScreenshot, onWidgetScreenshotChange }) => {
const widgetSources = ['Source 1', 'Source 2', 'Source 3'];
const widgetMethods = ['Method 1', 'Method 2', 'Method 3'];
const widgetTypes = ['Type 1', 'Type 2', 'Type 3'];
const [selectedWidgetSource, setSelectedWidgetSource] = useState(
widgetSources[0],
);
const [selectedWidgetMethod, setSelectedWidgetMethod] = useState(
widgetMethods[0],
);
const [selectedWidgetType, setSelectedWidgetType] = useState(widgetTypes[0]);
// const [widgetScreenshot, setWidgetScreenshot] = useState<File | null>(null);
const [widgetCode, setWidgetCode] = useState<string>(''); // Your widget code state
const handleScreenshotChange = (
event: React.ChangeEvent<HTMLInputElement>,
) => {
const files = event.target.files;
if (files && files.length > 0) {
const selectedScreenshot = files[0];
onWidgetScreenshotChange(selectedScreenshot); // Update the screenshot in the parent component
}
};
const handleCopyToClipboard = () => {
// Create a new textarea element to select the text
const textArea = document.createElement('textarea');
textArea.value = widgetCode;
document.body.appendChild(textArea);
// Select and copy the text
textArea.select();
document.execCommand('copy');
// Clean up the textarea element
document.body.removeChild(textArea);
};
return (
<div>
<div className="mt-[59px]">
<p className="font-bold text-jet">Widget Source</p>
<Dropdown
options={widgetSources}
selectedValue={selectedWidgetSource}
onSelect={setSelectedWidgetSource}
/>
</div>
<div className="mt-5">
<p className="font-bold text-jet">Widget Method</p>
<Dropdown
options={widgetMethods}
selectedValue={selectedWidgetMethod}
onSelect={setSelectedWidgetMethod}
/>
</div>
<div className="mt-5">
<p className="font-bold text-jet">Widget Type</p>
<Dropdown
options={widgetTypes}
selectedValue={selectedWidgetType}
onSelect={setSelectedWidgetType}
/>
</div>
<div className="mt-6">
<p className="font-bold text-jet">Widget Code Snippet</p>
<textarea
rows={4}
value={widgetCode}
onChange={(e) => setWidgetCode(e.target.value)}
className="mt-3 w-full rounded-lg border-2 p-2"
/>
</div>
<div className="mt-1">
<button
onClick={handleCopyToClipboard}
className="rounded-lg bg-blue-400 px-2 py-2 font-bold text-white transition-all hover:bg-blue-600"
>
Copy
</button>
</div>
<div className="mt-4">
<p className="text-lg font-semibold">Widget Screenshot</p>
<input type="file" accept="image/*" onChange={handleScreenshotChange} />
</div>
{widgetScreenshot && (
<div className="mt-4">
<img
src={URL.createObjectURL(widgetScreenshot)}
alt="Widget Screenshot"
className="max-w-full rounded-lg border border-gray-300"
/>
</div>
)}
</div>
);
};

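Note (not part of the diff): handleCopyToClipboard above relies on document.execCommand('copy'), which is deprecated. A hedged alternative would use the async Clipboard API and keep the textarea trick only as a fallback; the function name below is illustrative and not part of this changeset.

// Illustrative alternative; the Widgets component in this diff calls execCommand directly.
async function copyWidgetCode(widgetCode: string): Promise<void> {
  try {
    // Preferred path in modern browsers on secure origins.
    await navigator.clipboard.writeText(widgetCode);
  } catch {
    // Fallback for older browsers or non-secure contexts.
    const textArea = document.createElement('textarea');
    textArea.value = widgetCode;
    document.body.appendChild(textArea);
    textArea.select();
    document.execCommand('copy');
    document.body.removeChild(textArea);
  }
}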
@ -0,0 +1,10 @@
<svg width="17" height="17" viewBox="0 0 17 17" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_2494_3464)">
<path d="M7.43717 14.1667C7.43717 14.4485 7.54912 14.7187 7.74837 14.918C7.94763 15.1172 8.21788 15.2292 8.49967 15.2292C8.78147 15.2292 9.05172 15.1172 9.25098 14.918C9.45023 14.7187 9.56217 14.4485 9.56217 14.1667V9.56249H14.1663C14.4481 9.56249 14.7184 9.45055 14.9176 9.2513C15.1169 9.05204 15.2288 8.78179 15.2288 8.49999C15.2288 8.2182 15.1169 7.94795 14.9176 7.74869C14.7184 7.54944 14.4481 7.43749 14.1663 7.43749H9.56217V2.83333C9.56217 2.55154 9.45023 2.28128 9.25098 2.08203C9.05172 1.88277 8.78147 1.77083 8.49967 1.77083C8.21788 1.77083 7.94763 1.88277 7.74837 2.08203C7.54912 2.28128 7.43717 2.55154 7.43717 2.83333V7.43749H2.83301C2.55122 7.43749 2.28096 7.54944 2.08171 7.74869C1.88245 7.94795 1.77051 8.2182 1.77051 8.49999C1.77051 8.78179 1.88245 9.05204 2.08171 9.2513C2.28096 9.45055 2.55122 9.56249 2.83301 9.56249H7.43717V14.1667Z" fill="#5D5D5D" />
</g>
<defs>
<clipPath id="clip0_2494_3464">
<rect width="17" height="17" fill="white"/>
</clipPath>
</defs>
</svg>

@ -0,0 +1,3 @@
<svg width="13" height="23" viewBox="0 0 13 23" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12.3183 20.8523C12.4198 20.9597 12.4991 21.0861 12.5518 21.2242C12.6044 21.3622 12.6294 21.5093 12.6252 21.657C12.621 21.8047 12.5878 21.9502 12.5274 22.0851C12.467 22.2199 12.3807 22.3416 12.2733 22.4431C12.1659 22.5446 12.0395 22.6239 11.9014 22.6766C11.7634 22.7293 11.6163 22.7542 11.4686 22.75C11.3208 22.7459 11.1754 22.7126 11.0405 22.6522C10.9057 22.5919 10.784 22.5055 10.6825 22.3981L1.12001 12.2731C0.922533 12.0642 0.812501 11.7877 0.812501 11.5002C0.812501 11.2128 0.922533 10.9362 1.12001 10.7273L10.6825 0.601207C10.7833 0.49145 10.905 0.40282 11.0403 0.340468C11.1757 0.278115 11.3221 0.243281 11.4711 0.23799C11.62 0.232697 11.7685 0.257054 11.908 0.309644C12.0474 0.362231 12.175 0.442004 12.2834 0.544328C12.3917 0.646653 12.4787 0.769488 12.5392 0.9057C12.5997 1.04191 12.6325 1.18878 12.6357 1.33779C12.639 1.48679 12.6126 1.63495 12.5581 1.77367C12.5036 1.91238 12.4221 2.03889 12.3183 2.14583L3.48476 11.5002L12.3183 20.8523Z" fill="#7D54D1"/>
</svg>

@ -0,0 +1,3 @@
<svg width="13" height="23" viewBox="0 0 13 23" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0.681741 20.8523C0.580245 20.9597 0.500899 21.0861 0.448232 21.2242C0.395564 21.3622 0.370607 21.5093 0.374785 21.657C0.378964 21.8047 0.412197 21.9502 0.472586 22.0851C0.532974 22.2199 0.619336 22.3416 0.726741 22.4431C0.834146 22.5446 0.96049 22.6239 1.09856 22.6766C1.23663 22.7293 1.38372 22.7542 1.53144 22.75C1.67915 22.7459 1.8246 22.7126 1.95947 22.6522C2.09434 22.5919 2.216 22.5055 2.31749 22.3981L11.88 12.2731C12.0775 12.0642 12.1875 11.7877 12.1875 11.5002C12.1875 11.2128 12.0775 10.9362 11.88 10.7273L2.31749 0.601207C2.21666 0.49145 2.09503 0.40282 1.95967 0.340468C1.8243 0.278115 1.67789 0.243281 1.52895 0.23799C1.38 0.232697 1.23149 0.257054 1.09204 0.309644C0.95259 0.362231 0.824978 0.442004 0.716618 0.544328C0.608256 0.646653 0.521307 0.769488 0.46082 0.9057C0.400333 1.04191 0.367515 1.18878 0.364269 1.33779C0.361025 1.48679 0.387418 1.63495 0.441917 1.77367C0.496417 1.91238 0.577936 2.03889 0.681739 2.14583L9.51524 11.5002L0.681741 20.8523Z" fill="#7D54D1"/>
</svg>

@ -1,3 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="11" viewBox="0 0 14 11" fill="none">
<path d="M4.95919 10.1906C4.84318 10.1902 4.72847 10.166 4.62222 10.1194C4.51596 10.0729 4.42041 10.0049 4.34152 9.91985L0.229353 5.54538C0.0756344 5.38157 -0.00671208 5.1634 0.000428491 4.93886C0.00756906 4.71433 0.103612 4.50183 0.267428 4.34812C0.431245 4.1944 0.649417 4.11205 0.873948 4.11919C1.09848 4.12633 1.31098 4.22238 1.4647 4.38619L4.95073 8.10068L12.0666 0.316329C12.1389 0.226405 12.2287 0.152193 12.3306 0.0982513C12.4326 0.0443098 12.5445 0.0117775 12.6594 0.00265255C12.7744 -0.00647237 12.89 0.00800286 12.9992 0.045189C13.1084 0.082375 13.2088 0.141487 13.2943 0.218894C13.3798 0.296301 13.4485 0.390369 13.4964 0.49532C13.5442 0.600272 13.57 0.713891 13.5723 0.829198C13.5746 0.944506 13.5534 1.05907 13.5098 1.16585C13.4662 1.27263 13.4012 1.36937 13.3189 1.45014L5.58533 9.91139C5.50718 9.998 5.41197 10.0675 5.30567 10.1156C5.19938 10.1636 5.0843 10.1892 4.96766 10.1906H4.95919Z" fill="#747474"/>
</svg>
<svg width="16px" height="16px" viewBox="0 0 1024 1024" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg" fill="#11ee1c" stroke="#11ee1c" stroke-width="83.96799999999999"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"><path d="M866.133333 258.133333L362.666667 761.6l-204.8-204.8L98.133333 618.666667 362.666667 881.066667l563.2-563.2z" fill="#11ee1c"></path></g></svg>

@ -1,3 +1,3 @@
<svg width="14" height="17" stroke-width="1.15" viewBox="0 0 14 17" >
<svg width="13" height="16" stroke-width="1.15" viewBox="0 0 14 17" >
<path d="M13.8013 5.01282L8.80645 0.191795C8.67953 0.0691399 8.50734 0.000152609 8.32774 0H6.09677C5.43801 0 4.80623 0.252586 4.34041 0.702193C3.8746 1.1518 3.6129 1.7616 3.6129 2.39744V3.48718H2.48387C1.82511 3.48718 1.19332 3.73977 0.727509 4.18937C0.261693 4.63898 0 5.24878 0 5.88462V14.6026C0 15.2384 0.261693 15.8482 0.727509 16.2978C1.19332 16.7474 1.82511 17 2.48387 17H8.80645C9.46521 17 10.097 16.7474 10.5628 16.2978C11.0286 15.8482 11.2903 15.2384 11.2903 14.6026V13.5128H11.5161C12.1749 13.5128 12.8067 13.2602 13.2725 12.8106C13.7383 12.361 14 11.7512 14 11.1154V5.44872C13.9929 5.28447 13.9219 5.12884 13.8013 5.01282ZM9.03226 2.23179L11.6877 4.79487H9.03226V2.23179ZM9.93548 14.6026C9.93548 14.8916 9.81653 15.1688 9.6048 15.3731C9.39306 15.5775 9.10589 15.6923 8.80645 15.6923H2.48387C2.18443 15.6923 1.89726 15.5775 1.68552 15.3731C1.47379 15.1688 1.35484 14.8916 1.35484 14.6026V5.88462C1.35484 5.5956 1.47379 5.31842 1.68552 5.11405C1.89726 4.90968 2.18443 4.79487 2.48387 4.79487H3.6129V11.1154C3.6129 11.7512 3.8746 12.361 4.34041 12.8106C4.80623 13.2602 5.43801 13.5128 6.09677 13.5128H9.93548V14.6026ZM11.5161 12.2051H6.09677C5.79734 12.2051 5.51016 12.0903 5.29843 11.886C5.08669 11.6816 4.96774 11.4044 4.96774 11.1154V2.39744C4.96774 2.10842 5.08669 1.83124 5.29843 1.62687C5.51016 1.4225 5.79734 1.30769 6.09677 1.30769H7.67742V5.44872C7.67976 5.62143 7.75188 5.78643 7.87842 5.90856C8.00496 6.03069 8.1759 6.10031 8.35484 6.10256H12.6452V11.1154C12.6452 11.4044 12.5262 11.6816 12.3145 11.886C12.1027 12.0903 11.8156 12.2051 11.5161 12.2051Z" fill="#949494"/>
</svg>

File diff suppressed because one or more lines are too long

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="22" height="18" viewBox="0 0 22 18" fill="none">
<path d="M18.7359 2.29769C17.4059 1.66546 15.9659 1.20658 14.4659 0.941453C14.4528 0.941024 14.4397 0.943541 14.4276 0.948827C14.4155 0.954112 14.4047 0.962038 14.3959 0.972045C14.2159 1.30856 14.0059 1.74704 13.8659 2.08355C12.2749 1.83882 10.6569 1.83882 9.06594 2.08355C8.92594 1.73684 8.71594 1.30856 8.52594 0.972045C8.51594 0.951651 8.48594 0.941453 8.45594 0.941453C6.95594 1.20658 5.52594 1.66546 4.18594 2.29769C4.17594 2.29769 4.16594 2.30789 4.15594 2.31809C1.43594 6.46839 0.685937 10.5065 1.05594 14.5039C1.05594 14.5243 1.06594 14.5447 1.08594 14.5548C2.88594 15.9009 4.61594 16.7167 6.32594 17.2571C6.35594 17.2673 6.38594 17.2571 6.39594 17.2367C6.79594 16.6759 7.15594 16.0844 7.46594 15.4624C7.48594 15.4216 7.46594 15.3808 7.42594 15.3706C6.85594 15.1463 6.31594 14.8812 5.78594 14.5752C5.74594 14.5549 5.74594 14.4937 5.77594 14.4631C5.88594 14.3815 5.99594 14.2897 6.10594 14.2081C6.12594 14.1877 6.15594 14.1877 6.17594 14.1979C9.61594 15.7989 13.3259 15.7989 16.7259 14.1979C16.7459 14.1877 16.7759 14.1877 16.7959 14.2081C16.9059 14.2999 17.0159 14.3815 17.1259 14.4733C17.1659 14.5039 17.1659 14.565 17.1159 14.5854C16.5959 14.9016 16.0459 15.1565 15.4759 15.3808C15.4359 15.391 15.4259 15.442 15.4359 15.4726C15.7559 16.0946 16.1159 16.6861 16.5059 17.2469C16.5359 17.2571 16.5659 17.2673 16.5959 17.2571C18.3159 16.7167 20.0459 15.9009 21.8459 14.5548C21.8659 14.5447 21.8759 14.5243 21.8759 14.5039C22.3159 9.88449 21.1459 5.87695 18.7759 2.31809C18.7659 2.30789 18.7559 2.29769 18.7359 2.29769ZM7.98594 12.0667C6.95594 12.0667 6.09594 11.098 6.09594 9.90488C6.09594 8.7118 6.93594 7.74305 7.98594 7.74305C9.04594 7.74305 9.88594 8.72199 9.87594 9.90488C9.87594 11.098 9.03594 12.0667 7.98594 12.0667ZM14.9559 12.0667C13.9259 12.0667 13.0659 11.098 13.0659 9.90488C13.0659 8.7118 13.9059 7.74305 14.9559 7.74305C16.0159 7.74305 16.8559 8.72199 16.8459 9.90488C16.8459 11.098 16.0159 12.0667 14.9559 12.0667Z" fill="#949494"/>
</svg>

After

Width:  |  Height:  |  Size: 2.0 KiB

@ -1,4 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="current" xmlns="http://www.w3.org/2000/svg">
<path d="M6.37776 10.1001V12.9C6.37776 13.457 6.599 13.9911 6.99282 14.3849C7.38664 14.7788 7.92077 15 8.47772 15L11.2777 8.70011V1.00025H3.38181C3.04419 0.996436 2.71656 1.11477 2.45929 1.33344C2.20203 1.55212 2.03246 1.8564 1.98184 2.19023L1.01585 8.49012C0.985398 8.69076 0.998931 8.89563 1.05551 9.09053C1.1121 9.28543 1.21038 9.46569 1.34355 9.61884C1.47671 9.77198 1.64159 9.89434 1.82674 9.97744C2.01189 10.0605 2.2129 10.1024 2.41583 10.1001H6.37776ZM11.2777 1.00025H13.1466C13.5428 0.993247 13.9277 1.13195 14.2284 1.39002C14.5291 1.64809 14.7245 2.00758 14.7776 2.40023V7.30014C14.7245 7.69279 14.5291 8.05227 14.2284 8.31035C13.9277 8.56842 13.5428 8.70712 13.1466 8.70011H11.2777" fill="white"/>
<svg width="14" height="14" viewBox="0 0 16 16" fill="current" xmlns="http://www.w3.org/2000/svg">
<path d="M6.37776 10.1001V12.9C6.37776 13.457 6.599 13.9911 6.99282 14.3849C7.38664 14.7788 7.92077 15 8.47772 15L11.2777 8.70011V1.00025H3.38181C3.04419 0.996436 2.71656 1.11477 2.45929 1.33344C2.20203 1.55212 2.03246 1.8564 1.98184 2.19023L1.01585 8.49012C0.985398 8.69076 0.998931 8.89563 1.05551 9.09053C1.1121 9.28543 1.21038 9.46569 1.34355 9.61884C1.47671 9.77198 1.64159 9.89434 1.82674 9.97744C2.01189 10.0605 2.2129 10.1024 2.41583 10.1001H6.37776ZM11.2777 1.00025H13.1466C13.5428 0.993247 13.9277 1.13195 14.2284 1.39002C14.5291 1.64809 14.7245 2.00758 14.7776 2.40023V7.30014C14.7245 7.69279 14.5291 8.05227 14.2284 8.31035C13.9277 8.56842 13.5428 8.70712 13.1466 8.70011H11.2777" fill="none"/>
<path d="M11.2777 8.70011L8.47772 15C7.92077 15 7.38664 14.7788 6.99282 14.3849C6.599 13.9911 6.37776 13.457 6.37776 12.9V10.1001H2.41583C2.2129 10.1024 2.01189 10.0605 1.82674 9.97744C1.64159 9.89434 1.47671 9.77198 1.34355 9.61884C1.21038 9.46569 1.1121 9.28543 1.05551 9.09053C0.998931 8.89563 0.985398 8.69076 1.01585 8.49012L1.98184 2.19023C2.03246 1.8564 2.20203 1.55212 2.45929 1.33344C2.71656 1.11477 3.04419 0.996436 3.38181 1.00025H11.2777M11.2777 8.70011V1.00025M11.2777 8.70011H13.1466C13.5428 8.70712 13.9277 8.56842 14.2284 8.31035C14.5291 8.05227 14.7245 7.69279 14.7776 7.30014V2.40023C14.7245 2.00758 14.5291 1.64809 14.2284 1.39002C13.9277 1.13195 13.5428 0.993247 13.1466 1.00025H11.2777" stroke="current" stroke-width="1.4" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before

Width:  |  Height:  |  Size: 1.6 KiB

After

Width:  |  Height:  |  Size: 1.6 KiB

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" x="0px" y="0px" width="100" height="100" viewBox="0,0,256,256">
<g transform="translate(-19.2,-19.2) scale(1.15,1.15)"><g fill="#949494" fill-rule="nonzero" stroke="none" stroke-width="1" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="10" stroke-dasharray="" stroke-dashoffset="0" font-family="none" font-weight="none" font-size="none" text-anchor="none" style="mix-blend-mode: normal"><g transform="scale(10.66667,10.66667)"><path d="M13.172,2h-7.172c-1.1,0 -2,0.9 -2,2v16c0,1.1 0.9,2 2,2h12c1.1,0 2,-0.9 2,-2v-11.172c0,-0.53 -0.211,-1.039 -0.586,-1.414l-4.828,-4.828c-0.375,-0.375 -0.884,-0.586 -1.414,-0.586zM15,18h-6c-0.552,0 -1,-0.448 -1,-1v0c0,-0.552 0.448,-1 1,-1h6c0.552,0 1,0.448 1,1v0c0,0.552 -0.448,1 -1,1zM15,14h-6c-0.552,0 -1,-0.448 -1,-1v0c0,-0.552 0.448,-1 1,-1h6c0.552,0 1,0.448 1,1v0c0,0.552 -0.448,1 -1,1zM13,9v-5.5l5.5,5.5z"></path></g></g></g>
</svg>

After

Width:  |  Height:  |  Size: 933 B

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" x="0px" y="0px" width="100" height="100" viewBox="0,0,256,256">
<g transform="translate(-19.2,-19.2) scale(1.15,1.15)"><g fill="black" fill-opacity="0.54" fill-rule="nonzero" stroke="none" stroke-width="1" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="10" stroke-dasharray="" stroke-dashoffset="0" font-family="none" font-weight="none" font-size="none" text-anchor="none" style="mix-blend-mode: normal"><g transform="scale(10.66667,10.66667)"><path d="M13.172,2h-7.172c-1.1,0 -2,0.9 -2,2v16c0,1.1 0.9,2 2,2h12c1.1,0 2,-0.9 2,-2v-11.172c0,-0.53 -0.211,-1.039 -0.586,-1.414l-4.828,-4.828c-0.375,-0.375 -0.884,-0.586 -1.414,-0.586zM15,18h-6c-0.552,0 -1,-0.448 -1,-1v0c0,-0.552 0.448,-1 1,-1h6c0.552,0 1,0.448 1,1v0c0,0.552 -0.448,1 -1,1zM15,14h-6c-0.552,0 -1,-0.448 -1,-1v0c0,-0.552 0.448,-1 1,-1h6c0.552,0 1,0.448 1,1v0c0,0.552 -0.448,1 -1,1zM13,9v-5.5l5.5,5.5z"></path></g></g></g>
</svg>

After

Width:  |  Height:  |  Size: 951 B

@ -1,3 +1,3 @@
<svg width="10" height="5" viewBox="0 0 10 5" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0 0L5 5L10 0H0Z" fill="black" fill-opacity="0.54"/>
<path d="M0 0L5 5L10 0H0Z" fill="gray" fill-opacity="0.80"/>
</svg>

Before

Width:  |  Height:  |  Size: 163 B

After

Width:  |  Height:  |  Size: 162 B

@ -0,0 +1,4 @@
<svg width="27" height="26" viewBox="0 0 27 26" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.03371 5.27275L4.1915 20.9162C4.20021 21.7802 4.90766 22.4735 5.77162 22.4648L21.4151 22.307C22.2791 22.2983 22.9724 21.5909 22.9637 20.7269L22.8059 5.0834C22.7972 4.21944 22.0897 3.52612 21.2258 3.53483L5.58228 3.69262C4.71831 3.70134 4.02499 4.40878 4.03371 5.27275Z" stroke="#949494" stroke-width="2.08591" stroke-linejoin="round"/>
<path d="M9.42289 22.428L9.23354 3.65585M17.6924 15.0436L15.5856 12.9788L17.6504 10.872M6.29419 22.4596L12.5516 22.3965M6.10484 3.68741L12.3622 3.62429" stroke="#949494" stroke-width="2.08591" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 692 B

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="21" height="22" viewBox="0 0 21 22" fill="none">
<path d="M19.8155 2.09139C19.0825 1.34392 18.2005 0.970703 17.1685 0.970703H4.68155C3.64955 0.970703 2.76755 1.34392 2.03455 2.09139C1.30155 2.83885 0.935547 3.73825 0.935547 4.79061V17.524C0.935547 18.5763 1.30155 19.4757 2.03455 20.2232C2.76755 20.9707 3.64955 21.3439 4.68155 21.3439H7.59555C7.78555 21.3439 7.92855 21.3368 8.02455 21.3235C8.13624 21.3007 8.23705 21.2399 8.31055 21.1512C8.40555 21.0492 8.45355 20.9013 8.45355 20.7076L8.44655 19.8051C8.44255 19.23 8.44055 18.7752 8.44055 18.4387L8.14055 18.4917C7.95055 18.5274 7.71055 18.5427 7.41955 18.5386C7.11624 18.5329 6.81389 18.5019 6.51555 18.4458C6.19795 18.386 5.89897 18.2497 5.64355 18.0481C5.37604 17.8418 5.17652 17.5572 5.07155 17.2323L4.94155 16.9264C4.83198 16.6851 4.69432 16.4581 4.53155 16.2503C4.34555 16.0025 4.15655 15.8353 3.96555 15.7466L3.87555 15.6803C3.81279 15.6345 3.75571 15.5811 3.70555 15.5212C3.65764 15.4657 3.6182 15.4031 3.58855 15.3356C3.56255 15.2734 3.58455 15.2224 3.65355 15.1827C3.72355 15.1419 3.84855 15.1225 4.03155 15.1225L4.29155 15.1633C4.46455 15.198 4.67955 15.304 4.93455 15.4804C5.19261 15.6598 5.40818 15.8957 5.56555 16.1708C5.76555 16.5328 6.00555 16.8091 6.28755 16.9998C6.56955 17.1895 6.85355 17.2854 7.13955 17.2854C7.42555 17.2854 7.67255 17.2629 7.88155 17.2191C8.08366 17.1765 8.28005 17.1094 8.46655 17.0192C8.54455 16.4278 8.75655 15.9709 9.10355 15.6528C8.65392 15.6078 8.2083 15.5281 7.77055 15.4142C7.34334 15.2945 6.93249 15.1208 6.54755 14.8972C6.14479 14.6736 5.78905 14.3714 5.50055 14.008C5.22355 13.6541 4.99555 13.1901 4.81755 12.616C4.64055 12.0409 4.55155 11.377 4.55155 10.6255C4.55155 9.55581 4.89355 8.64519 5.57855 7.89263C5.25855 7.08908 5.28855 6.18662 5.66955 5.18831C5.92155 5.10775 6.29455 5.16791 6.78855 5.36676C7.28255 5.56561 7.64455 5.7359 7.87455 5.87662C8.10455 6.01939 8.28855 6.13869 8.42755 6.23557C9.24052 6.00486 10.0807 5.88889 10.9245 5.8909C11.7835 5.8909 12.6155 6.00613 13.4225 6.23557L13.9165 5.91741C14.2965 5.68476 14.6973 5.48946 15.1135 5.33413C15.5735 5.15669 15.9235 5.10877 16.1675 5.18831C16.5575 6.18764 16.5915 7.08908 16.2705 7.89365C16.9555 8.64519 17.2985 9.55581 17.2985 10.6265C17.2985 11.3781 17.2095 12.044 17.0315 12.6221C16.8545 13.2013 16.6245 13.6653 16.3425 14.0151C16.049 14.3739 15.6917 14.6731 15.2895 14.8972C14.8695 15.1358 14.4615 15.3081 14.0665 15.4142C13.6288 15.5284 13.1832 15.6085 12.7335 15.6538C13.1835 16.0515 13.4095 16.6786 13.4095 17.5362V20.7076C13.4095 20.8575 13.4305 20.9788 13.4745 21.0716C13.4948 21.1163 13.5236 21.1564 13.5593 21.1895C13.5951 21.2226 13.637 21.2481 13.6825 21.2643C13.7785 21.299 13.8625 21.3215 13.9365 21.3296C14.0105 21.3398 14.1165 21.3429 14.2545 21.3429H17.1685C18.2005 21.3429 19.0825 20.9696 19.8155 20.2222C20.5475 19.4757 20.9145 18.5753 20.9145 17.523V4.79061C20.9145 3.73825 20.5485 2.83885 19.8155 2.09139Z" fill="#949494"/>
</svg>

After

Width:  |  Height:  |  Size: 2.9 KiB

@ -0,0 +1,3 @@
<svg width="600" height="450" viewBox="0 0 600 450" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M25 25H575M25 225H575M25 425H575" stroke="#949494" stroke-opacity="0.54" stroke-width="50" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 256 B

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="21" height="21" viewBox="0 0 21 21" fill="none">
<path d="M10.9355 0.442139C5.41555 0.442139 0.935547 5.01053 0.935547 10.6394C0.935547 16.2683 5.41555 20.8367 10.9355 20.8367C16.4555 20.8367 20.9355 16.2683 20.9355 10.6394C20.9355 5.01053 16.4555 0.442139 10.9355 0.442139ZM11.9355 15.7381H9.93555V9.61971H11.9355V15.7381ZM11.9355 7.58025H9.93555V5.54079H11.9355V7.58025Z" fill="#949494"/>
</svg>

After

Width:  |  Height:  |  Size: 446 B

@ -1,4 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="current" xmlns="http://www.w3.org/2000/svg">
<path d="M9.39995 5.89997V3.09999C9.39995 2.54304 9.1787 2.0089 8.78487 1.61507C8.39105 1.22125 7.85691 1 7.29996 1L4.49998 7.29996V14.9999H12.3959C12.7336 15.0037 13.0612 14.8854 13.3185 14.6667C13.5757 14.448 13.7453 14.1437 13.7959 13.8099L14.7619 7.50996C14.7924 7.30931 14.7788 7.10444 14.7222 6.90954C14.6657 6.71464 14.5674 6.53437 14.4342 6.38123C14.301 6.22808 14.1362 6.10572 13.951 6.02262C13.7659 5.93952 13.5649 5.89767 13.3619 5.89997H9.39995ZM4.49998 14.9999H2.39999C2.02869 14.9999 1.6726 14.8524 1.41005 14.5899C1.1475 14.3273 1 13.9712 1 13.5999V8.69995C1 8.32865 1.1475 7.97256 1.41005 7.71001C1.6726 7.44746 2.02869 7.29996 2.39999 7.29996H4.49998" fill="white"/>
<svg width="14" height="14" viewBox="0 0 16 16" fill="current" xmlns="http://www.w3.org/2000/svg">
<path d="M9.39995 5.89997V3.09999C9.39995 2.54304 9.1787 2.0089 8.78487 1.61507C8.39105 1.22125 7.85691 1 7.29996 1L4.49998 7.29996V14.9999H12.3959C12.7336 15.0037 13.0612 14.8854 13.3185 14.6667C13.5757 14.448 13.7453 14.1437 13.7959 13.8099L14.7619 7.50996C14.7924 7.30931 14.7788 7.10444 14.7222 6.90954C14.6657 6.71464 14.5674 6.53437 14.4342 6.38123C14.301 6.22808 14.1362 6.10572 13.951 6.02262C13.7659 5.93952 13.5649 5.89767 13.3619 5.89997H9.39995ZM4.49998 14.9999H2.39999C2.02869 14.9999 1.6726 14.8524 1.41005 14.5899C1.1475 14.3273 1 13.9712 1 13.5999V8.69995C1 8.32865 1.1475 7.97256 1.41005 7.71001C1.6726 7.44746 2.02869 7.29996 2.39999 7.29996H4.49998" fill="none"/>
<path d="M4.49998 7.29996L7.29996 1C7.85691 1 8.39105 1.22125 8.78487 1.61507C9.1787 2.0089 9.39995 2.54304 9.39995 3.09999V5.89997H13.3619C13.5649 5.89767 13.7659 5.93952 13.951 6.02262C14.1362 6.10572 14.301 6.22808 14.4342 6.38123C14.5674 6.53437 14.6657 6.71464 14.7223 6.90954C14.7788 7.10444 14.7924 7.30931 14.7619 7.50996L13.7959 13.8099C13.7453 14.1437 13.5757 14.448 13.3185 14.6667C13.0612 14.8854 12.7336 15.0037 12.3959 14.9999H4.49998M4.49998 7.29996V14.9999M4.49998 7.29996H2.39999C2.02869 7.29996 1.6726 7.44746 1.41005 7.71001C1.1475 7.97256 1 8.32865 1 8.69995V13.5999C1 13.9712 1.1475 14.3273 1.41005 14.5899C1.6726 14.8524 2.02869 14.9999 2.39999 14.9999H4.49998" stroke="current" stroke-width="1.39999" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before

Width:  |  Height:  |  Size: 1.5 KiB

After

Width:  |  Height:  |  Size: 1.5 KiB

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none">
<path d="M20 4H4C2.9 4 2 4.9 2 6V24L6 20H20C21.1 20 22 19.1 22 18V6C22 4.9 21.1 4 20 4ZM20 18H6L4 20V6H20V18Z" fill="#949494"/>
</svg>

After

Width:  |  Height:  |  Size: 232 B

@ -1,3 +1,3 @@
<svg width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M18 0H2C0.9 0 0 0.9 0 2V20L4 16H18C19.1 16 20 15.1 20 14V2C20 0.9 19.1 0 18 0ZM18 14H4L2 16V2H18V14Z" fill="black" fill-opacity="0.54"/>
<path d="M18 0H2C0.9 0 0 0.9 0 2V20L4 16H18C19.1 16 20 15.1 20 14V2C20 0.9 19.1 0 18 0ZM18 14H4L2 16V2H18V14Z" fill="gray" fill-opacity="0.80"/>
</svg>

Before

Width:  |  Height:  |  Size: 249 B

After

Width:  |  Height:  |  Size: 248 B

@ -0,0 +1,3 @@
<svg width="21" height="18" viewBox="0 0 21 18" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0.00999999 18L21 9L0.00999999 0L0 7L15 9L0 11L0.00999999 18Z" fill="#949494"/>
</svg>

After

Width:  |  Height:  |  Size: 192 B

Some files were not shown because too many files have changed in this diff
