Merge branch 'main' into New-prompt-

This commit is contained in:
Alex 2023-10-22 11:16:03 -04:00 committed by GitHub
commit 9303746d80
41 changed files with 983 additions and 426 deletions

Two binary image files added (48 KiB and 21 KiB), not shown.

@ -1,41 +1,48 @@
# Welcome to DocsGPT Contributing Guidelines
Thank you for choosing this project to contribute to. We are all very grateful!
Thank you for choosing to contribute to DocsGPT! We are all very grateful!
### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
# We accept different types of contributions
📣 **Discussions** - where you can start a new topic or answer some questions
📣 **Discussions** - Engage in conversations, start new topics, or help answer questions.
🐞 **Issues** - This is how we track tasks, sometimes it is bugs that need fixing, and sometimes it is new features
🐞 **Issues** - This is where we keep track of tasks. It could be bugs, fixes, or suggestions for new features.
🛠️ **Pull requests** - This is how you can suggest changes to our repository, to work on existing issues or add new features
🛠️ **Pull requests** - Suggest changes to our repository, either by working on existing issues or adding new features.
📚 **Wiki** - where we have our documentation
📚 **Wiki** - This is where our documentation resides.
## 🐞 Issues and Pull requests
We value contributions to our issues in the form of discussion or suggestions. We recommend that you check out existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
If you want to contribute by writing code, there are a few things that you should know before doing it:
Before creating issues, please check out how the latest version of our app looks and works by launching it via [Quickstart](https://github.com/arc53/DocsGPT#quickstart); the version on our live demo is slightly modified with login. Your issues should relate to the version that you can launch via [Quickstart](https://github.com/arc53/DocsGPT#quickstart).
We have a frontend in React (Vite) and backend in Python.
### 👨‍💻 If you're interested in contributing code, here are some important things to know:
### If you are looking to contribute to frontend (⚛React, Vite):
Tech Stack Overview:
- The current frontend is being migrated from `/application` to `/frontend` with a new design, so please contribute to the new one.
- 🌐 Frontend: Built with React (Vite) ⚛️,
- 🖥 Backend: Developed in Python 🐍
### 🌐 If you are looking to contribute to frontend (⚛React, Vite):
- The current frontend is being migrated from [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) to [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) with a new design, so please contribute to the new one.
- Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
- The Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
- The updated Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
Please try to follow the guidelines.
### If you are looking to contribute to Backend (🐍 Python):
- Check out our issues and contribute to `/application` or `/scripts` (ignore old `ingest_rst.py` `ingest_rst_sphinx.py` files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your PR, ensure it is queryable after ingesting some test data.
### 🖥 If you are looking to contribute to Backend (🐍 Python):
- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts) (please disregard old [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py) files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your Pull Request, ensure it can be queried after ingesting some test data.
### Testing
To run unit tests from the root of the repository, execute:
@ -43,10 +50,62 @@ To run unit tests from the root of the repository, execute:
python -m pytest
```
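For illustration, a minimal pytest test could look like the sketch below. The helper and file name are hypothetical, not existing DocsGPT code; real tests live under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests).
```python
# tests/test_example.py -- hypothetical minimal test, for illustration only.

def slugify(title: str) -> str:
    """Toy helper used only to show the expected test layout."""
    return title.strip().lower().replace(" ", "-")


def test_slugify_replaces_spaces_with_dashes():
    assert slugify("  DocsGPT Contributing Guide ") == "docsgpt-contributing-guide"
```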
### Workflow:
Create a fork, make changes on your forked repository, and submit changes as a pull request.
## Workflow 📈
Here's a step-by-step guide on how to contribute to DocsGPT:
1. **Fork the Repository:**
- Click the "Fork" button at the top-right of this repository to create your fork.
2. **Create and Switch to a New Branch:**
- Create a new branch for your contribution using:
```shell
git checkout -b your-branch-name
```
3. **Make Changes:**
- Make the required changes in your branch.
4. **Add Changes to the Staging Area:**
- Add your changes to the staging area using:
```shell
git add .
```
5. **Commit Your Changes:**
- Commit your changes with a descriptive commit message using:
```shell
git commit -m "Your descriptive commit message"
```
6. **Push Your Changes to the Remote Repository:**
- Push your branch with changes to your fork on GitHub using:
```shell
git push origin your-branch-name
```
7. **Submit a Pull Request (PR):**
- Create a Pull Request from your branch to the main repository. Make sure to include a detailed description of your changes and reference any related issues.
8. **Collaborate:**
- Be responsive to comments and feedback on your PR.
- Make necessary updates as suggested.
- Once your PR is approved, it will be merged into the main repository.
9. **Testing:**
- Before submitting a Pull Request, ensure your code passes all unit tests.
- To run unit tests from the root of the repository, execute:
```shell
python -m pytest
```
*Note: You only need to run the unit tests after making changes to the backend code.*
10. **Questions and Collaboration:**
- Feel free to join our Discord. We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
Thank you for considering contributing to DocsGPT! 🙏
## Questions/collaboration
Please join our [Discord](https://discord.gg/n5BX8dh8rU). Don't hesitate; we are very friendly and welcoming to new contributors.
# Thank you so much for considering contributing to DocsGPT!🙏
Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU). We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
# Thank you so much for considering contributing to DocsGPT! 🙏


@ -17,14 +17,14 @@ Familiarize yourself with the current contributions and our [Roadmap](https://gi
Deciding to contribute with code? Here are some insights based on the area of your interest:
- Frontend (⚛React, Vite):
- Most of the code is located in `/frontend` folder. You can also check out our React extension in /extensions/react-widget.
- Most of the code is located in [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) folder. You can also check out our React extension in [`/extensions/react-widget`](https://github.com/arc53/DocsGPT/tree/main/extensions/react-widget).
- For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
- Ensure you adhere to the established guidelines.
- Backend (🐍Python):
- Focus on `/application` or `/scripts`. However, avoid the files ingest_rst.py and ingest_rst_sphinx.py, as they will soon be deprecated.
- Focus on [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts). However, avoid the files [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) and [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py), as they will soon be deprecated.
- Newly added code should come with relevant unit tests (pytest).
- Refer to the `/tests` folder for test suites.
- Refer to the [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder for test suites.
Check out our [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
@ -32,4 +32,10 @@ Once you have created your PR and our maintainers have merged it, please fill in
Feel free to join our Discord server. We're here to help newcomers, so don't hesitate to jump in! [Join us here](https://discord.gg/n5BX8dh8rU).
Thank you very much for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your contributions could earn you a stylish new t-shirt as a token of our appreciation. 🎁 Join us, and let's code together! 🚀
Thank you very much for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your contributions (not just simple typo fixes) could earn you a stylish new t-shirt as a token of our appreciation. 🎁 Join us, and let's code together! 🚀
Here is a preview of the shirts:
<p align="center">
<img src="Assets/DocsGPT tee-front.jpeg" width="350" />
<img src="Assets/DocsGPT tee-back.jpeg" width="350" />
</p>


@ -7,9 +7,9 @@
</p>
<p align="left">
<strong>DocsGPT</strong> is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
<strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> is a cutting-edge open-source solution that streamlines the process of finding information in the project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
Say goodbye to time-consuming manual searches, and let <strong>DocsGPT</strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
Say goodbye to time-consuming manual searches, and let <strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
</p>
<div align="center">
@ -21,58 +21,56 @@ Say goodbye to time-consuming manual searches, and let <strong>DocsGPT</strong>
</div>
### Production Support / Help for companies:
### Production Support / Help for companies:
We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.
- [Get Support 👋](https://airtable.com/appdeaL0F1qV8Bl2C/shrrJF1Ll7btCJRbP)
- [Book Demo 👋](https://airtable.com/appdeaL0F1qV8Bl2C/shrrJF1Ll7btCJRbP)
- [Send Email ✉️](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)
### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
![video-example-of-docs-gpt](https://d3dg1063dc54p9.cloudfront.net/videos/demov3.gif)
## Roadmap
You can find our roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues; it helps us improve DocsGPT!
## Our Open-Source models optimized for DocsGPT:
| Name | Base Model | Requirements (or similar) |
|-------------------|------------|----------------------------------------------------------|
| [Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon) | Falcon-7b | 1xA10G gpu |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b) | llama-2-14b | 2xA10 gpu's |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon) | falcon-40b | 8xA10G gpu's |
| Name | Base Model | Requirements (or similar) |
| --------------------------------------------------------------------- | ----------- | ------------------------- |
| [Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon)    | Falcon-7b   | 1x A10G GPU               |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b)                | llama-2-14b | 2x A10 GPUs               |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon)  | falcon-40b  | 8x A10G GPUs              |
If you don't have enough resources to run it, you can use bitsandbytes to quantize.
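For reference, quantized loading might look roughly like the sketch below (an illustration using the Hugging Face `transformers` and `bitsandbytes` packages; it is not part of this repository, and the prompt is only an example):
```python
# Hypothetical sketch: load Docsgpt-7b-falcon with 4-bit bitsandbytes quantization.
# Assumes `pip install transformers accelerate bitsandbytes` and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Arc53/docsgpt-7b-falcon"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s); trust_remote_code may be needed on older transformers versions
)

prompt = "How do I ingest new documentation into DocsGPT?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```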
## Features
![Group 9](https://user-images.githubusercontent.com/17906039/220427472-2644cff4-7666-46a5-819f-fc4a521f63c7.png)
## Useful links
[Live preview](https://docsgpt.arc53.com/)
[Join our Discord](https://discord.gg/n5BX8dh8rU)
[Guides](https://docs.docsgpt.co.uk/)
[Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- 🔍🔥 [Live preview](https://docsgpt.arc53.com/)
[How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)
- 💬🎉[Join our Discord](https://discord.gg/n5BX8dh8rU)
[How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)
- 📚😎 [Guides](https://docs.docsgpt.co.uk/)
- 👩‍💻👨‍💻 [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- 🗂️🚀 [How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)
- 🏠🔐 [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)
## Project structure
- Application - Flask app (main application).
- Extensions - Chrome extension.
- Scripts - Script that creates similarity search index and stores for other libraries.
- Scripts - Script that creates similarity search index for other libraries.
- Frontend - Frontend uses Vite and React.
@ -89,15 +87,17 @@ It will install all the dependencies and allow you to download the local model o
Otherwise, refer to this Guide:
1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your OpenAI API key and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not.
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys) and `VITE_API_STREAMING` to true or false, depending on whether you want streaming answers or not.
It should look like this inside:
```
API_KEY=Yourkey
VITE_API_STREAMING=true
```
See optional environment variables in the `/.env-template` and `/application/.env_sample` files.
3. Run `./run-with-docker-compose.sh`.
See optional environment variables in the [/.env-template](https://github.com/arc53/DocsGPT/blob/main/.env-template) and [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) files.
3. Run [./run-with-docker-compose.sh](https://github.com/arc53/DocsGPT/blob/main/run-with-docker-compose.sh).
4. Navigate to http://localhost:5173/.
To stop, just run `Ctrl + C`.
@ -105,10 +105,12 @@ To stop, just run `Ctrl + C`.
## Development environments
### Spin up mongo and redis
For development, only two containers are used from `docker-compose.yaml` (by deleting all services except for Redis and Mongo).
For development, only two containers are used from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) (by deleting all services except for Redis and Mongo).
See file [docker-compose-dev.yaml](./docker-compose-dev.yaml).
Run
```
docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
@ -119,44 +121,57 @@ docker compose -f docker-compose-dev.yaml up -d
Make sure you have Python 3.10 or 3.11 installed.
1. Export required environment variables or prepare a `.env` file in the `/application` folder:
- Copy `.env_sample` and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields.
- Copy [.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields.
(check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.)
2. (optional) Create a Python virtual environment:
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments .
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
a) On Mac OS and Linux
```commandline
python -m venv venv
. venv/bin/activate
```
b) On Windows
```commandline
python -m venv venv
venv/Scripts/activate
```
3. Change to the `application/` subdir and install dependencies for the backend:
3. Change to the `application/` subdirectory by running `cd application/` and install the backend dependencies:
```commandline
pip install -r application/requirements.txt
pip install -r requirements.txt
```
4. Run the app using `flask run --host=0.0.0.0 --port=7091`.
5. Start worker with `celery -A application.app.celery worker -l INFO`.
### Start frontend
### Start frontend
Make sure you have Node version 16 or higher.
1. Navigate to the `/frontend` folder.
2. Install dependencies by running `npm install`.
3. Run the app using `npm run dev`.
1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder.
2. Install the required global packages `husky` and `vite` (skip if already installed):
```commandline
npm install husky -g
npm install vite -g
```
3. Install dependencies by running `npm install --include=dev`.
4. Run the app using `npm run dev`.
## Contributing
Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information about how to get involved. We welcome issues, questions, and pull requests.
Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information about how to get involved. We welcome issues, questions, and pull requests.
## Code Of Conduct
We as members, contributors, and leaders, pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more information about contributing.
## Many Thanks To Our Contributors
@ -166,6 +181,7 @@ We as members, contributors, and leaders, pledge to make participation in our co
</a>
## License
The source code license is MIT, as described in the LICENSE file.
The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file.
Built with [🦜️🔗 LangChain](https://github.com/hwchase17/langchain)


@ -53,6 +53,15 @@ def get_single_conversation():
conversation = conversations_collection.find_one({"_id": ObjectId(conversation_id)})
return jsonify(conversation['queries'])
@user.route("/api/update_conversation_name", methods=["POST"])
def update_conversation_name():
# update data for a conversation
data = request.get_json()
id = data["id"]
name = data["name"]
conversations_collection.update_one({"_id": ObjectId(id)},{"$set":{"name":name}})
return {"status": "ok"}
@user.route("/api/feedback", methods=["POST"])
def api_feedback():
@ -75,6 +84,19 @@ def api_feedback():
)
return {"status": http.client.responses.get(response.status_code, "ok")}
@user.route("/api/delete_by_ids", methods=["get"])
def delete_by_ids():
"""Delete by ID. These are the IDs in the vectorstore"""
ids = request.args.get("path")
if not ids:
return {"status": "error"}
if settings.VECTOR_STORE == "faiss":
result = vectors_collection.delete_index(ids=ids)
if result:
return {"status": "ok"}
return {"status": "error"}
@user.route("/api/delete_old", methods=["get"])
def delete_old():


@ -41,7 +41,7 @@ Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
kombu==5.2.4
langchain==0.0.308
langchain==0.0.312
loguru==0.6.0
lxml==4.9.2
MarkupSafe==2.1.2
@ -104,3 +104,4 @@ urllib3==1.26.17
vine==5.0.0
wcwidth==0.2.6
yarl==1.8.2
sentence-transformers==2.2.2


@ -1,5 +1,5 @@
from application.vectorstore.base import BaseVectorStore
from langchain.vectorstores import FAISS
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
class FaissStore(BaseVectorStore):
@ -7,20 +7,40 @@ class FaissStore(BaseVectorStore):
def __init__(self, path, embeddings_key, docs_init=None):
super().__init__()
self.path = path
embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
if docs_init:
self.docsearch = FAISS.from_documents(
docs_init, self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
docs_init, embeddings
)
else:
self.docsearch = FAISS.load_local(
self.path, self._get_embeddings(settings.EMBEDDINGS_NAME, settings.EMBEDDINGS_KEY)
self.path, embeddings
)
self.assert_embedding_dimensions(embeddings)
def search(self, *args, **kwargs):
return self.docsearch.similarity_search(*args, **kwargs)
def add_texts(self, *args, **kwargs):
return self.docsearch.add_texts(*args, **kwargs)
def save_local(self, *args, **kwargs):
return self.docsearch.save_local(*args, **kwargs)
def delete_index(self, *args, **kwargs):
return self.docsearch.delete(*args, **kwargs)
def assert_embedding_dimensions(self, embeddings):
"""
Check that the word embedding dimension of the docsearch index matches
the dimension of the word embeddings used
"""
if settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
try:
word_embedding_dimension = embeddings.client[1].word_embedding_dimension
except AttributeError as e:
raise AttributeError("word_embedding_dimension not found in embeddings.client[1]") from e
docsearch_index_dimension = self.docsearch.index.d
if word_embedding_dimension != docsearch_index_dimension:
raise ValueError(f"word_embedding_dimension ({word_embedding_dimension}) " +
f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")


@ -21,8 +21,7 @@ except FileExistsError:
def metadata_from_filename(title):
store = title.split('/')
store = store[1] + '/' + store[2]
store = '/'.join(title.split('/')[1:3])
return {'title': title, 'store': store}
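For example, with a hypothetical input path, the refactored one-liner produces the same result as the two lines it replaces:
```python
# Illustrative only: `store` is built from path segments 1 and 2 of the title.
title = 'inputs/pandas/docs/io.rst'
store = '/'.join(title.split('/')[1:3])
print(store)  # -> 'pandas/docs'
```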


@ -1 +1,53 @@
# nextra-docsgpt
# nextra-docsgpt
## Setting Up Docs Folder of DocsGPT Locally
### 1. Clone the DocsGPT repository:
```
git clone https://github.com/arc53/DocsGPT.git
```
### 2. Navigate to the docs folder:
```
cd DocsGPT/docs
```
The docs folder contains the markdown files that make up the documentation. The majority of the files are in the pages directory. Some notable files in this folder include:
`index.mdx`: The main documentation file.
`_app.js`: This file is used to customize the default Next.js application shell.
`theme.config.jsx`: This file is for configuring the Nextra theme for the documentation.
### 3. Verify that you have Node.js and npm installed in your system. You can check by running:
```
node --version
npm --version
```
### 4. If not installed, download Node.js and npm from the respective official websites.
### 5. Once you have Node.js and npm running, proceed to install yarn - another package manager that helps to manage project dependencies:
```
npm install --global yarn
```
### 6. Install the project dependencies using yarn:
```
yarn install
```
### 7. After the successful installation of the project dependencies, start the local server:
```
yarn dev
```
- Now, you should be able to view the docs on your local environment by visiting `http://localhost:5000`. You can explore the different markdown files and make changes as you see fit.
- Footnotes: This guide assumes you have Node.js and npm installed. It involves running a local server using yarn and viewing the documentation locally. If you encounter any issues, it may be worth verifying your Node.js and npm installations and whether you have installed yarn correctly.


@ -1,4 +1,10 @@
{
"scripts":{
"dev": "next dev",
"build": "next build",
"start": "next start"
},
"license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.0.2",
"docsgpt": "^0.2.4",
@ -8,4 +14,4 @@
"react": "^18.2.0",
"react-dom": "^18.2.0"
}
}
}


@ -4,45 +4,38 @@ Here's a step-by-step guide on how to setup an Amazon Lightsail instance to host
## Configuring your instance
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here).
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking [here](#connecting-to-your-newly-created-instance)).
### 1. Create an account or login to https://lightsail.aws.amazon.com
### 1. Create an AWS Account:
If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.
### 2. Click on "Create instance"
### 2. Create an Instance:
### 3. Create your instance
a. Click "Create Instance."
The first step is to select the "Instance location". In most cases, there's no need to switch locations as the default one will work well.
b. Select the "Instance location." In most cases, the default location works fine.
After that, it is time to pick your Instance Image. We recommend using "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
As for instance plan, it'll vary depending on your unique demands, but a "1 GB, 1vCPU, 40GB SSD and 2TB transfer" setup should cover most scenarios.
d. Configure the instance plan based on your requirements. A "1 GB, 1vCPU, 40GB SSD, and 2TB transfer" setup is recommended for most scenarios.
Lastly, Identify your instance by giving it a unique name and then hit "Create instance".
e. Give your instance a unique name and click "Create Instance."
PS: Once you create your instance, it'll likely take a few minutes for the setup to be completed.
PS: It may take a few minutes for the instance setup to complete.
#### The recommended configuration is as follows:
### Connecting to Your newly created Instance
- Ubuntu 20.04 LTS
- 1GB RAM
- 1vCPU
- 40GB SSD Hard Drive
- 2TB transfer
Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."
### Connecting to your newly created instance
#### Clone the DocsGPT Repository
Your instance will be ready for use a few minutes after being created. To access it, just open it up and click on "Connect using SSH".
#### Clone the repository
A terminal window will pop up, and the first step will be to clone the DocsGPT git repository:
A terminal window will pop up, and the first step will be to clone the DocsGPT Git repository:
`git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so simply enter the following command:
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
@ -56,13 +49,13 @@ And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT folder
#### Access the DocsGPT Folder
Enter the following command to access the folder in which DocsGPT docker-compose file is present.
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the environment
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
@ -78,16 +71,16 @@ SELF_HOSTED_MODEL=false
To save the file, press CTRL+X, then Y, and then ENTER.
Next, we need to set a correct IP for our Backend. To do so, open the docker-compose.yml file:
Next, set the correct IP for the Backend by opening the docker-compose.yml file:
`nano docker-compose.yml`
And change this line 7 `VITE_API_HOST=http://localhost:7091`
And change line 7 from `VITE_API_HOST=http://localhost:7091`
to this `VITE_API_HOST=http://<your instance public IP>:7091`
This will allow the frontend to connect to the backend.
#### Running the app
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
@ -97,16 +90,19 @@ Launching it for the first time will take a few minutes to download all the nece
Once this is done you can go ahead and close the terminal window.
#### Enabling ports
#### Enabling Ports
Before you are able to access your live instance, you must first enable the port that it is using.
a. Before you are able to access your live instance, you must first enable the port that it is using.
Open your Lightsail instance and head to "Networking".
b. Open your Lightsail instance and head to "Networking".
Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.
#### Access your instance
Your instance will now be available under your Public IP Address and port `5173`. Enjoy!
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)


@ -1,24 +1,107 @@
## Launching Web App
Note: Make sure you have Docker installed
**Note**: Make sure you have Docker installed
On Mac OS or Linux just write:
**On macOS or Linux:**
Just run the following command:
`./setup.sh`
It will install all the dependencies and give you an option to download the local model or use OpenAI
This command will install all the necessary dependencies and provide you with an option to download the local model or use OpenAI.
Otherwise, refer to this Guide:
If you prefer to follow manual steps, refer to this guide:
1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI api key](https://platform.openai.com/account/api-keys).
3. Run `docker-compose build && docker-compose up`.
1. Open and download this repository with
`git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys).
3. Run the following commands:
`docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.
To stop just run `Ctrl + C`.
To stop, simply press Ctrl + C.
**For WINDOWS:**
To run the setup on Windows, you have two options: using the Windows Subsystem for Linux (WSL) or using Git Bash or Command Prompt.
**Option 1: Using Windows Subsystem for Linux (WSL):**
1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: (https://learn.microsoft.com/en-us/windows/wsl/install).
2. After setting up WSL, open the WSL terminal.
3. Clone the repository and create the `.env` file:
```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
`./run-with-docker-compose.sh`
5. Open your web browser and navigate to (http://localhost:5173/).
6. To stop the setup, just press `Ctrl + C` in the WSL terminal.
**Option 2: Using Git Bash or Command Prompt (CMD):**
1. Install Git for Windows if you haven't already. Download it from the official website: (https://gitforwindows.org/).
2. Open Git Bash or Command Prompt.
3. Clone the repository and create the `.env` file:
```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
`./run-with-docker-compose.sh`
5. Open your web browser and navigate to (http://localhost:5173/).
6. To stop the setup, just press `Ctrl + C` in the Git Bash or Command Prompt terminal.
These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt.
**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.
For WINDOWS:
To run the given setup on Windows, you can use the Windows Subsystem for Linux (WSL) or a Git Bash terminal to execute similar commands. Here are the steps adapted for Windows:
Option 1: Using Windows Subsystem for Linux (WSL):
1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: (https://learn.microsoft.com/en-us/windows/wsl/install).
2. After setting up WSL, open the WSL terminal.
3. Clone the repository and create the `.env` file:
```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
`./run-with-docker-compose.sh`
5. Open your web browser and navigate to (http://localhost:5173/).
6. To stop the setup, just press `Ctrl + C` in the WSL terminal
Option 2: Using Git Bash or Command Prompt (CMD):
1. Install Git for Windows if you haven't already. You can download it from the official website: (https://gitforwindows.org/).
2. Open Git Bash or Command Prompt.
3. Clone the repository and create the `.env` file:
```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4.Run the following command to start the setup with Docker Compose:
`./run-with-docker-compose.sh`
5.Open your web browser and navigate to (http://localhost:5173/).
6.To stop the setup, just press Ctrl + C in the Git Bash or Command Prompt terminal.
These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt. Make sure you have Docker installed and properly configured on your Windows system for this to work.
### Chrome Extension
To install the Chrome extension:
#### Installing the Chrome extension:
To enhance your DocsGPT experience, you can install the DocsGPT Chrome extension. Here's how:
1. In the DocsGPT GitHub repository, click on the "Code" button and select "Download ZIP".
2. Unzip the downloaded file to a location you can easily access.


@ -1,9 +1,25 @@
Currently, the application provides the following main API endpoints:
# API Endpoints Documentation
### /api/answer
It's a POST request that sends a JSON in body with 4 values. It will receive an answer for a user provided question.
Here is a JavaScript fetch example:
*Currently, the application provides the following main API endpoints:*
### 1. /api/answer
**Description:**
This endpoint is used to request answers to user-provided questions.
**Request:**
Method: POST
Headers: Content-Type should be set to "application/json; charset=utf-8"
Request Body: JSON object with the following fields:
* **question:** The user's question
* **history:** (Optional) Previous conversation history
* **api_key:** Your API key
* **embeddings_key:** Your embeddings key
* **active_docs:** The location of active documentation
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
@ -18,8 +34,9 @@ fetch("http://127.0.0.1:5000/api/answer", {
.then(console.log.bind(console))
```
In response you will get a json document like this one:
**Response**
In response, you will get a JSON document containing the answer, the query, and the result:
```json
{
"answer": " Hi there! How can I help you?\n",
@ -28,10 +45,17 @@ In response you will get a json document like this one:
}
```
### /api/docs_check
It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)).
It's a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example:
### 2. /api/docs_check
**Description:**
This endpoint will make sure documentation is loaded on the server (just run it every time a user switches between libraries/documentations).
**Request:**
Method: POST
Headers: Content-Type should be set to "application/json; charset=utf-8"
Request Body: JSON object with the field:
* **docs:** The location of the documentation
```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
@ -45,7 +69,9 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
.then(console.log.bind(console))
```
In response you will get a json document like this one:
**Response:**
In response, you will get a JSON document like this one, indicating whether the documentation exists or not:
```json
{
"status": "exists"
@ -53,18 +79,36 @@ In response you will get a json document like this one:
```
### /api/combine
Provides json that tells UI which vectors are available and where they are located with a simple get request.
### 3. /api/combine
**Description:**
This endpoint provides information about available vectors and their locations with a simple GET request.
**Request:**
Method: GET
**Response:**
Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
Example of json in Docshub and local:
Example of JSON in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
### /api/upload
Uploads file that needs to be trained, response is json with task id, which can be used to check on tasks progress
### 4. /api/upload
**Description:**
This endpoint is used to upload a file that needs to be trained; the response is JSON with a task ID, which can be used to check the task's progress.
**Request:**
Method: POST
Request Body: A multipart/form-data form with file upload and additional fields, including "user" and "name."
HTML example:
```html
@ -79,20 +123,24 @@ HTML example:
</form>
```
Response:
```json
{
"status": "ok",
"task_id": "b2684988-9047-428b-bd47-08518679103c"
}
```
**Response:**
JSON response with a status and a task ID that can be used to check the task's progress.
### /api/task_status
Gets task status (`task_id`) from `/api/upload`:
### 5. /api/task_status
**Description:**
This endpoint is used to get the status of a task (`task_id`) from `/api/upload`.
**Request:**
Method: GET
Query Parameter: task_id (task ID to check)
**Sample JavaScript Fetch Request:**
```js
// Task status (Get http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
@ -102,9 +150,12 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then(console.log.bind(console))
```
Responses:
**Response:**
There are two types of responses:
1. while task it still running, where "current" will show progress from 0 to 100
1. While the task is still running, the 'current' value will show progress from 0 to 100.
```json
{
"result": {
@ -114,7 +165,7 @@ There are two types of responses:
}
```
2. When task is completed
2. When task is completed:
```json
{
"result": {
@ -132,8 +183,14 @@ There are two types of responses:
}
```
### /api/delete_old
Deletes old vectorstores:
### 6. /api/delete_old
**Description:**
This endpoint is used to delete old Vector Stores.
**Request:**
Method: GET
```js
// Delete old (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://localhost:5001/api/delete_old", {
@ -144,10 +201,11 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
Response:
JSON response indicating the status of the operation.
```json
{ "status": "ok" }
```


@ -1,29 +1,42 @@
### To start chatwoot extension:
1. Prepare and start the DocsGPT itself (load your documentation too). Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data.
2. Go to chatwoot, **Navigate** to your profile (bottom left), click on profile settings, scroll to the bottom and copy **Access Token**.
3. Navigate to `/extensions/chatwoot`. Copy `.env_sample` and create `.env` file.
4. Fill in the values.
### To Start Chatwoot Extension:
```
docsgpt_url=<docsgpt_api_url>
chatwoot_url=<chatwoot_url>
docsgpt_key=<openai_api_key or other llm key>
chatwoot_token=<from part 2>
```
1. **Prepare and Start DocsGPT:**
- Launch DocsGPT using the instructions in our [wiki](https://github.com/arc53/DocsGPT/wiki).
- Make sure to load your documentation.
5. Start with `flask run` command.
2. **Get Access Token from Chatwoot:**
- Navigate to Chatwoot.
- Go to your profile (bottom left), click on profile settings.
- Scroll to the bottom and copy the **Access Token**.
If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation.
3. **Set Up Chatwoot Extension:**
- Navigate to `/extensions/chatwoot`.
- Copy `.env_sample` and create a `.env` file.
- Fill in the values in the `.env` file:
```env
docsgpt_url=<docsgpt_api_url>
chatwoot_url=<chatwoot_url>
docsgpt_key=<openai_api_key or other llm key>
chatwoot_token=<from part 2>
```
### Optional (extra validation)
In `app.py` uncomment lines 12-13 and 71-75
4. **Start the Extension:**
- Use the command `flask run` to start the extension.
in your `.env` file add:
5. **Optional: Extra Validation**
- In `app.py`, uncomment lines 12-13 and 71-75.
- Add the following lines to your `.env` file:
```
account_id=(optional) 1
assignee_id=(optional) 1
```
```env
account_id=(optional) 1
assignee_id=(optional) 1
```
Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user.
These Chatwoot values help ensure you respond to the correct widget and handle questions assigned to a specific user.
### Stopping Bot Responses for Specific User or Session:
- If you want the bot to stop responding to questions for a specific user or session, add a label `human-requested` in your conversation.
### Additional Notes:
- For further details on training on other documentation, refer to our [wiki](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation).


@ -4,7 +4,7 @@
Go to your project and install a new dependency: `npm install docsgpt`.
### Usage
Go to your project and in the file where you want to use the widget import it:
Go to your project and in the file where you want to use the widget, import it:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
@ -14,12 +14,12 @@ import "docsgpt/dist/style.css";
Then you can use it like this: `<DocsGPTWidget />`
DocsGPTWidget takes 3 props:
- `apiHost`url of your DocsGPT API.
- `selectDocs` — documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`).
- `apiKey` — usually its empty.
- `apiHost`URL of your DocsGPT API.
- `selectDocs` — documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
- `apiKey` — usually it's empty.
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install you widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";


@ -17,8 +17,7 @@ When using code examples, use the following format:
(code)
{summaries}
Thank you
```


@ -5,18 +5,18 @@ This AI can use any documentation, but first it needs to be prepared for similar
Start by going to `/scripts/` folder.
If you open this file you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
If you open this file, you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$.
It currently uses OpenAI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-$4.
You can usually find documentation on github in `docs/` folder for most open-source projects.
You can usually find documentation on GitHub in the `docs/` folder for most open-source projects.
### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
Name it `inputs/`
Put all your .rst/.md files in there
The search is recursive, so you don't need to flatten them
- Name it `inputs/`
- Put all your .rst/.md files in there
- The search is recursive, so you don't need to flatten them
If there are no .rst/.md files just convert whatever you find to txt and feed it. (don't forget to change the extension in script)
If there are no .rst/.md files, just convert whatever you find to .txt and feed it (don't forget to change the extension in the script).
### 2. Create .env file in `scripts/` folder
And write your OpenAI API key inside
@ -32,7 +32,7 @@ It will tell you how much it will cost
### 5. Run web app
Once you run it will use new context that is relevant to your documentation
Once you run it, it will use the new context that is relevant to your documentation.
Make sure you select default in the dropdown in the UI
## Customization
@ -41,7 +41,7 @@ You can learn more about options while running ingest.py by running:
`python ingest.py --help`
| Options | |
|:--------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:|
| **ingest** | Runs 'ingest' function converting documentation to Faiss plus Index format |
| **ingest** | Runs 'ingest' function, converting documentation to Faiss plus Index format |
| --dir TEXT | List of paths to directory for index creation. E.g. --dir inputs --dir inputs2 [default: inputs] |
| --file TEXT | File paths to use (Optional; overrides directory) E.g. --files inputs/1.md --files inputs/2.md |
| --recursive / --no-recursive | Whether to recursively search in subdirectories [default: recursive] |
@ -56,4 +56,4 @@ You can learn more about options while running ingest.py by running:
| | |
| **convert** | Creates documentation in .md format from source code |
| --dir TEXT | Path to a directory with source code. E.g. --dir inputs [default: inputs] |
| --formats TEXT | Source code language from which to create documentation. Supports py, js and java. E.g. --formats py [default: py] |
| --formats TEXT | Source code language from which to create documentation. Supports py, js and java. E.g. --formats py [default: py] |


@ -1,4 +1,4 @@
Fortunately there are many providers for LLM's and some of them can even be ran locally
Fortunately, there are many providers for LLMs, and some of them can even be run locally.
There are two models used in the app:
1. Embeddings.
@ -21,12 +21,16 @@ By default, we use OpenAI's models but if you want to change it or even run it l
You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.
Options:
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon)
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)
EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)
If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder: `https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf`.
Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you.
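For instance, a fully local Llama setup could use an `.env` roughly like this (a sketch only; adjust names and values to your deployment):
```
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
```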
That's it!
### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with important data and don't want anything to leave your premises.
Make sure you set `SELF_HOSTED_MODEL` as true in you `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` file, and for `LLM_NAME` you can use anything that's on Hugging Face.


@ -1,10 +1,10 @@
If your AI uses external knowledge and is not explicit enough it is ok, because we try to make docsgpt friendly.
If your AI uses external knowledge and is not explicit enough, it is ok, because we try to make DocsGPT friendly.
But if you want to adjust it, here is a simple way.
But if you want to adjust it, here is a simple way:
Got to `application/prompts/chat_combine_prompt.txt`
- Go to `application/prompts/chat_combine_prompt.txt`
And change it to
- And change it to
```


@ -1,3 +1,3 @@
# Please put appropriate value
VITE_API_HOST=http://localhost:7091
VITE_API_HOST=http://0.0.0.0:7091
VITE_API_STREAMING=true


@ -11,6 +11,7 @@
"@reduxjs/toolkit": "^1.9.2",
"@vercel/analytics": "^0.1.10",
"react": "^18.2.0",
"react-copy-to-clipboard": "^5.1.0",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.3",
"react-markdown": "^8.0.7",
@ -60,12 +61,13 @@
}
},
"node_modules/@babel/code-frame": {
"version": "7.18.6",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.18.6.tgz",
"integrity": "sha512-TDCmlK5eOvH+eH7cdAFlNXeVJqWIQ7gW9tY1GJIpUtFb6CmjVyq2VM3u71bOyR8CRihcCgMUYoDNyLXao3+70Q==",
"version": "7.22.13",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.22.13.tgz",
"integrity": "sha512-XktuhWlJ5g+3TJXc5upd9Ks1HutSArik6jf2eAjYFyIOf4ej3RN+184cZbzDvbPnuTJIUhPKKJE3cIsYTiAT3w==",
"dev": true,
"dependencies": {
"@babel/highlight": "^7.18.6"
"@babel/highlight": "^7.22.13",
"chalk": "^2.4.2"
},
"engines": {
"node": ">=6.9.0"
@ -111,13 +113,14 @@
}
},
"node_modules/@babel/generator": {
"version": "7.20.14",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.20.14.tgz",
"integrity": "sha512-AEmuXHdcD3A52HHXxaTmYlb8q/xMEhoRP67B3T4Oq7lbmSoqroMZzjnGj3+i1io3pdnF8iBYVu4Ilj+c4hBxYg==",
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.0.tgz",
"integrity": "sha512-lN85QRR+5IbYrMWM6Y4pE/noaQtg4pNiqeNGX60eqOfo6gtEj6uw/JagelB8vVztSd7R6M5n1+PQkDbHbBRU4g==",
"dev": true,
"dependencies": {
"@babel/types": "^7.20.7",
"@babel/types": "^7.23.0",
"@jridgewell/gen-mapping": "^0.3.2",
"@jridgewell/trace-mapping": "^0.3.17",
"jsesc": "^2.5.1"
},
"engines": {
@ -158,34 +161,34 @@
}
},
"node_modules/@babel/helper-environment-visitor": {
"version": "7.18.9",
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.18.9.tgz",
"integrity": "sha512-3r/aACDJ3fhQ/EVgFy0hpj8oHyHpQc+LPtJoY9SzTThAsStm4Ptegq92vqKoE3vD706ZVFWITnMnxucw+S9Ipg==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
"integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
"dev": true,
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-function-name": {
"version": "7.19.0",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.19.0.tgz",
"integrity": "sha512-WAwHBINyrpqywkUH0nTnNgI5ina5TFn85HKS0pbPDfxFfhyR/aNQEn4hGi1P1JyT//I0t4OgXUlofzWILRvS5w==",
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
"integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
"dev": true,
"dependencies": {
"@babel/template": "^7.18.10",
"@babel/types": "^7.19.0"
"@babel/template": "^7.22.15",
"@babel/types": "^7.23.0"
},
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-hoist-variables": {
"version": "7.18.6",
"resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.18.6.tgz",
"integrity": "sha512-UlJQPkFqFULIcyW5sbzgbkxn2FKRgwWiRexcuaR8RNJRy8+LLveqPjwZV/bwrLZCN0eUHD/x8D0heK1ozuoo6Q==",
"version": "7.22.5",
"resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.22.5.tgz",
"integrity": "sha512-wGjk9QZVzvknA6yKIUURb8zY3grXCcOZt+/7Wcy8O2uctxhplmUPkOdlgoNhmdVee2c92JXbf1xpMtVNbfoxRw==",
"dev": true,
"dependencies": {
"@babel/types": "^7.18.6"
"@babel/types": "^7.22.5"
},
"engines": {
"node": ">=6.9.0"
@ -244,30 +247,30 @@
}
},
"node_modules/@babel/helper-split-export-declaration": {
"version": "7.18.6",
"resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.18.6.tgz",
"integrity": "sha512-bde1etTx6ZyTmobl9LLMMQsaizFVZrquTEHOqKeQESMKo4PlObf+8+JA25ZsIpZhT/WEd39+vOdLXAFG/nELpA==",
"version": "7.22.6",
"resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.22.6.tgz",
"integrity": "sha512-AsUnxuLhRYsisFiaJwvp1QF+I3KjD5FOxut14q/GzovUe6orHLesW2C7d754kRm53h5gqrz6sFl6sxc4BVtE/g==",
"dev": true,
"dependencies": {
"@babel/types": "^7.18.6"
"@babel/types": "^7.22.5"
},
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-string-parser": {
"version": "7.19.4",
"resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.19.4.tgz",
"integrity": "sha512-nHtDoQcuqFmwYNYPz3Rah5ph2p8PFeFCsZk9A/48dPc/rGocJ5J3hAAZ7pb76VWX3fZKu+uEr/FhH5jLx7umrw==",
"version": "7.22.5",
"resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
"integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
"dev": true,
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-validator-identifier": {
"version": "7.19.1",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.19.1.tgz",
"integrity": "sha512-awrNfaMtnHUr653GgGEs++LlAvW6w+DcPrOliSMXWCKo597CwL5Acf/wWdNkf/tfEQE3mjkeD1YOVZOUV/od1w==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
"integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
"dev": true,
"engines": {
"node": ">=6.9.0"
@ -297,13 +300,13 @@
}
},
"node_modules/@babel/highlight": {
"version": "7.18.6",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.18.6.tgz",
"integrity": "sha512-u7stbOuYjaPezCuLj29hNW1v64M2Md2qupEKP1fHc7WdOA3DgLh37suiSrZYY7haUB7iBeQZ9P1uiRF359do3g==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.22.20.tgz",
"integrity": "sha512-dkdMCN3py0+ksCgYmGG8jKeGA/8Tk+gJwSYYlFGxG5lmhfKNoAy004YpLxpS1W2J8m/EK2Ew+yOs9pVRwO89mg==",
"dev": true,
"dependencies": {
"@babel/helper-validator-identifier": "^7.18.6",
"chalk": "^2.0.0",
"@babel/helper-validator-identifier": "^7.22.20",
"chalk": "^2.4.2",
"js-tokens": "^4.0.0"
},
"engines": {
@ -311,9 +314,9 @@
}
},
"node_modules/@babel/parser": {
"version": "7.20.15",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.20.15.tgz",
"integrity": "sha512-DI4a1oZuf8wC+oAJA9RW6ga3Zbe8RZFt7kD9i4qAspz3I/yHet1VvC3DiSy/fsUvv5pvJuNPh0LPOdCcqinDPg==",
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.0.tgz",
"integrity": "sha512-vvPKKdMemU85V9WE/l5wZEmImpCtLqbnTvqDS2U1fJ96KrxoW7KrXhNsNCblQlg8Ck4b85yxdTyelsMUgFUXiw==",
"dev": true,
"bin": {
"parser": "bin/babel-parser.js"
@ -364,33 +367,33 @@
}
},
"node_modules/@babel/template": {
"version": "7.20.7",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.20.7.tgz",
"integrity": "sha512-8SegXApWe6VoNw0r9JHpSteLKTpTiLZ4rMlGIm9JQ18KiCtyQiAMEazujAHrUS5flrcqYZa75ukev3P6QmUwUw==",
"version": "7.22.15",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
"integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.18.6",
"@babel/parser": "^7.20.7",
"@babel/types": "^7.20.7"
"@babel/code-frame": "^7.22.13",
"@babel/parser": "^7.22.15",
"@babel/types": "^7.22.15"
},
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/traverse": {
"version": "7.20.13",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.20.13.tgz",
"integrity": "sha512-kMJXfF0T6DIS9E8cgdLCSAL+cuCK+YEZHWiLK0SXpTo8YRj5lpJu3CDNKiIBCne4m9hhTIqUg6SYTAI39tAiVQ==",
"version": "7.23.2",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.2.tgz",
"integrity": "sha512-azpe59SQ48qG6nu2CzcMLbxUudtN+dOM9kDbUqGq3HXUJRlo7i8fvPoxQUzYgLZ4cMVmuZgm8vvBpNeRhd6XSw==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.18.6",
"@babel/generator": "^7.20.7",
"@babel/helper-environment-visitor": "^7.18.9",
"@babel/helper-function-name": "^7.19.0",
"@babel/helper-hoist-variables": "^7.18.6",
"@babel/helper-split-export-declaration": "^7.18.6",
"@babel/parser": "^7.20.13",
"@babel/types": "^7.20.7",
"@babel/code-frame": "^7.22.13",
"@babel/generator": "^7.23.0",
"@babel/helper-environment-visitor": "^7.22.20",
"@babel/helper-function-name": "^7.23.0",
"@babel/helper-hoist-variables": "^7.22.5",
"@babel/helper-split-export-declaration": "^7.22.6",
"@babel/parser": "^7.23.0",
"@babel/types": "^7.23.0",
"debug": "^4.1.0",
"globals": "^11.1.0"
},
@ -399,13 +402,13 @@
}
},
"node_modules/@babel/types": {
"version": "7.20.7",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.20.7.tgz",
"integrity": "sha512-69OnhBxSSgK0OzTJai4kyPDiKTIe3j+ctaHdIGVbRahTLAT7L3R9oeXHC2aVSuGYt3cVnoAMDmOCgJ2yaiLMvg==",
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.0.tgz",
"integrity": "sha512-0oIyUfKoI3mSqMvsxBdclDwxXKXAUA8v/apZbc+iSyARYou1o8ZGDxbUYyLFoW2arqS2jDGqJuZvv1d/io1axg==",
"dev": true,
"dependencies": {
"@babel/helper-string-parser": "^7.19.4",
"@babel/helper-validator-identifier": "^7.19.1",
"@babel/helper-string-parser": "^7.22.5",
"@babel/helper-validator-identifier": "^7.22.20",
"to-fast-properties": "^2.0.0"
},
"engines": {
@ -2248,6 +2251,14 @@
"integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==",
"dev": true
},
"node_modules/copy-to-clipboard": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/copy-to-clipboard/-/copy-to-clipboard-3.3.3.tgz",
"integrity": "sha512-2KV8NhB5JqC3ky0r9PMCAZKbUHSwtEo4CwCs0KXgruG43gX5PMqDEBbVU4OUzw2MuAWUfsuFmWvEKG5QRfSnJA==",
"dependencies": {
"toggle-selection": "^1.0.6"
}
},
"node_modules/cosmiconfig": {
"version": "7.1.0",
"resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-7.1.0.tgz",
@ -6072,6 +6083,18 @@
"node": ">=0.10.0"
}
},
"node_modules/react-copy-to-clipboard": {
"version": "5.1.0",
"resolved": "https://registry.npmjs.org/react-copy-to-clipboard/-/react-copy-to-clipboard-5.1.0.tgz",
"integrity": "sha512-k61RsNgAayIJNoy9yDsYzDe/yAZAzEbEgcz3DZMhF686LEyukcE1hzurxe85JandPUG+yTfGVFzuEw3xt8WP/A==",
"dependencies": {
"copy-to-clipboard": "^3.3.1",
"prop-types": "^15.8.1"
},
"peerDependencies": {
"react": "^15.3.0 || 16 || 17 || 18"
}
},
"node_modules/react-dom": {
"version": "18.2.0",
"resolved": "https://registry.npmjs.org/react-dom/-/react-dom-18.2.0.tgz",
@ -6907,6 +6930,11 @@
"node": ">=8.0"
}
},
"node_modules/toggle-selection": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/toggle-selection/-/toggle-selection-1.0.6.tgz",
"integrity": "sha512-BiZS+C1OS8g/q2RRbJmy59xpyghNBqrr6k5L/uKBGRsTfxmu3ffiRnd8mlGPUVayg8pvfi5urfnu8TU7DVOkLQ=="
},
"node_modules/trim-lines": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz",

View File

@ -22,6 +22,7 @@
"@reduxjs/toolkit": "^1.9.2",
"@vercel/analytics": "^0.1.10",
"react": "^18.2.0",
"react-copy-to-clipboard": "^5.1.0",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.3",
"react-markdown": "^8.0.7",

View File

@ -4,7 +4,7 @@
export default function About() {
return (
<div className="mx-5 grid min-h-screen md:mx-36">
<article className=" place-items-left mx-auto my-auto flex w-full max-w-6xl flex-col gap-6 rounded-3xl bg-gray-100 p-6 pt-14 pb-14 text-jet lg:p-10 xl:p-16">
<article className="place-items-left mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col gap-6 rounded-3xl bg-gray-100 p-6 text-jet lg:p-10 xl:p-16">
<div className="flex items-center">
<p className="mr-2 text-3xl">About DocsGPT</p>
<p className="text-[21px]">🦖</p>
@ -51,9 +51,11 @@ export default function About() {
</div>
<p>
Currently It uses <span className= "text-blue-950 font-medium">DocsGPT</span> documentation, so it will respond to
information relevant to <span className= "text-blue-950 font-medium">DocsGPT</span> . If you want to train it on different
documentation - please follow
Currently, it uses{' '}
<span className="text-blue-950 font-medium">DocsGPT</span>{' '}
documentation, so it will respond to information relevant to{' '}
<span className="text-blue-950 font-medium">DocsGPT</span> . If you
want to train it on different documentation - please follow
<a
className="text-blue-500"
href="https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation"

View File

@ -18,9 +18,7 @@ export default function App() {
<div
className={`transition-all duration-200 ${
!isMobile
? `ml-0 ${
!navOpen ? '-mt-5 md:mx-auto lg:mx-auto' : 'md:ml-72 lg:ml-60'
}`
? `ml-0 ${!navOpen ? '-mt-5 md:mx-auto lg:mx-auto' : 'md:ml-72'}`
: 'ml-0 md:ml-16'
}`}
>

View File

@ -5,26 +5,28 @@ export default function Hero({ className = '' }: { className?: string }) {
<p className="mr-2 text-4xl font-semibold">DocsGPT</p>
<p className="text-[27px]">🦖</p>
</div>
<p className="mb-2 text-center text-sm font-normal leading-6 text-black-1000">
<p className="mb-3 text-center leading-6 text-black-1000">
Welcome to DocsGPT, your technical documentation assistant!
</p>
<p className="mb-2 text-center text-sm font-normal leading-6 text-black-1000 ">
<p className="mb-3 text-center leading-6 text-black-1000">
Enter a query related to the information in the documentation you
selected to receive and we will provide you with the most relevant
answers.
selected
<br /> and we will provide you with the most relevant answers.
</p>
<p className="mb-2 text-center text-sm font-normal leading-6 text-black-1000 ">
<p className="mb-3 text-center leading-6 text-black-1000">
Start by entering your query in the input field below and we will do the
rest!
</p>
<div className="sections mt-7 flex flex-wrap items-center justify-center gap-1 sm:gap-1 md:gap-0 ">
<div className=" rounded-[28px] bg-gradient-to-l from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tr-none md:rounded-br-none">
<div className="h-full rounded-[25px] bg-white p-6 md:rounded-tr-none md:rounded-br-none">
<img src="/message-text.svg" alt="lock" className="h-7 w-7" />
<h2 className="mt-2 mb-5 text-base font-normal leading-6">
Chat with Your Data
</h2>
<p className="w-[300px] text-sm font-normal leading-6 text-black-1000">
<div className="sections mt-1 flex flex-wrap items-center justify-center gap-1 sm:gap-1 md:gap-0 ">
<div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tr-none md:rounded-br-none">
<div className="h-full rounded-[45px] bg-white p-6 md:rounded-tr-none md:rounded-br-none">
<img
src="/message-text.svg"
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">Chat with Your Data</h2>
<p className="w-[250px] text-xs text-gray-500">
DocsGPT will use your data to answer questions. Whether it&apos;s
documentation, source code, or Microsoft files, DocsGPT allows you
to have interactive conversations and find answers based on the
@ -33,13 +35,11 @@ export default function Hero({ className = '' }: { className?: string }) {
</div>
</div>
<div className=" rounded-[28px] bg-gradient-to-r from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-none md:py-1 md:px-0">
<div className="rounded-[25px] bg-white px-6 py-6 md:rounded-none">
<img src="/lock.svg" alt="lock" className="h-7 w-7" />
<h2 className="mt-2 mb-5 text-base font-normal leading-6">
Secure Data Storage
</h2>
<p className=" w-[300px] text-sm font-normal leading-6 text-black-1000">
<div className=" rounded-[50px] bg-gradient-to-r from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-none md:py-1 md:px-0">
<div className="rounded-[45px] bg-white px-6 py-4 md:rounded-none">
<img src="/lock.svg" alt="lock" className="h-[24px] w-[24px]" />
<h2 className="mt-2 mb-3 text-lg font-bold">Secure Data Storage</h2>
<p className=" w-[250px] text-xs text-gray-500">
The security of your data is our top priority. DocsGPT ensures the
utmost protection for your sensitive information. With secure data
storage and privacy measures in place, you can trust that your
@ -47,17 +47,15 @@ export default function Hero({ className = '' }: { className?: string }) {
</p>
</div>
</div>
<div className=" rounded-[28px] bg-gradient-to-l from-[#6EE7B7]/80 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tl-none md:rounded-bl-none">
<div className="rounded-[25px] bg-white p-6 px-6 lg:rounded-tl-none lg:rounded-bl-none">
<div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/80 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tl-none md:rounded-bl-none">
<div className="rounded-[45px] bg-white p-6 px-6 lg:rounded-tl-none lg:rounded-bl-none">
<img
src="/message-programming.svg"
alt="lock"
className="h-7 w-7"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-5 text-base font-normal leading-6">
Open Source Code
</h2>
<p className=" w-[300px] text-sm font-normal leading-6 text-black-1000">
<h2 className="mt-2 mb-3 text-lg font-bold">Open Source Code</h2>
<p className=" w-[250px] text-xs text-gray-500">
DocsGPT is built on open source principles, promoting transparency
and collaboration. The source code is freely available, enabling
developers to contribute, enhance, and customize the app to meet

View File

@ -32,6 +32,7 @@ import { useMediaQuery, useOutsideAlerter } from './hooks';
import Upload from './upload/Upload';
import { Doc, getConversations } from './preferences/preferenceApi';
import SelectDocsModal from './preferences/SelectDocsModal';
import ConversationTile from './conversation/ConversationTile';
interface NavigationProps {
navOpen: boolean;
@ -68,27 +69,26 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
useEffect(() => {
if (!conversations) {
getConversations()
.then((fetchedConversations) => {
dispatch(setConversations(fetchedConversations));
})
.catch((error) => {
console.error('Failed to fetch conversations: ', error);
});
fetchConversations();
}
}, [conversations, dispatch]);
async function fetchConversations() {
return await getConversations()
.then((fetchedConversations) => {
dispatch(setConversations(fetchedConversations));
})
.catch((error) => {
console.error('Failed to fetch conversations: ', error);
});
}
const handleDeleteConversation = (id: string) => {
fetch(`${apiHost}/api/delete_conversation?id=${id}`, {
method: 'POST',
})
.then(() => {
// remove the image element from the DOM
const imageElement = document.querySelector(
`#img-${id}`,
) as HTMLElement;
const parentElement = imageElement.parentNode as HTMLElement;
parentElement.parentNode?.removeChild(parentElement);
fetchConversations();
})
.catch((error) => console.error(error));
};
@ -126,6 +126,29 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
);
});
};
async function updateConversationName(updatedConversation: {
name: string;
id: string;
}) {
await fetch(`${apiHost}/api/update_conversation_name`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(updatedConversation),
})
.then((response) => response.json())
.then((data) => {
if (data) {
navigate('/');
fetchConversations();
}
})
.catch((err) => {
console.error(err);
});
}
useOutsideAlerter(
navRef,
() => {
@ -142,11 +165,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
*/
useEffect(() => {
if (isMobile) {
setNavOpen(false);
return;
}
setNavOpen(true);
setNavOpen(!isMobile);
}, [isMobile]);
return (
@ -171,7 +190,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
ref={navRef}
className={`${
!navOpen && '-ml-96 md:-ml-[18rem]'
} duration-20 fixed z-20 flex h-full w-72 flex-col border-r-2 bg-gray-50 transition-all`}
} duration-20 fixed top-0 z-20 flex h-full w-72 flex-col border-r-2 bg-gray-50 transition-all`}
>
<div className={'visible h-16 w-full border-b-2 md:h-12'}>
<button
@ -206,53 +225,25 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
}
>
<img src={Message} className="ml-4 w-5"></img>
<p className="my-auto text-eerie-black">New Chat</p>
<p className="my-auto text-sm text-eerie-black">New Chat</p>
</NavLink>
<div className="conversations-container max-h-[25rem] overflow-y-auto">
{conversations
? conversations.map((conversation) => {
return (
<div
key={conversation.id}
onClick={() => {
handleConversationClick(conversation.id);
}}
className={`my-auto mx-4 mt-4 flex h-12 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${
conversationId === conversation.id ? 'bg-gray-100' : ''
}`}
>
<div className="flex gap-4">
<img src={Message} className="ml-2 w-5"></img>
<p className="my-auto text-eerie-black">
{conversation.name.length > 45
? conversation.name.substring(0, 45) + '...'
: conversation.name}
</p>
</div>
{conversationId === conversation.id ? (
<img
src={Exit}
alt="Exit"
className="mr-4 h-3 w-3 cursor-pointer hover:opacity-50"
id={`img-${conversation.id}`}
onClick={(event) => {
event.stopPropagation();
handleDeleteConversation(conversation.id);
}}
/>
) : null}
</div>
);
})
: null}
{conversations?.map((conversation) => (
<ConversationTile
key={conversation.id}
conversation={conversation}
selectConversation={(id) => handleConversationClick(id)}
onDeleteConversation={(id) => handleDeleteConversation(id)}
onSave={(conversation) => updateConversationName(conversation)}
/>
))}
</div>
<div className="flex-grow border-b-2 border-gray-100"></div>
<div className="flex flex-col-reverse border-b-2">
<div className="relative my-4 flex gap-2 px-2">
<div
className="flex h-12 min-w-[85%] cursor-pointer justify-between rounded-3xl rounded-md border-2 bg-white"
className="flex h-12 w-full cursor-pointer justify-between rounded-3xl border-2 bg-white"
onClick={() => setIsDocsListOpen(!isDocsListOpen)}
>
{selectedDocs && (
@ -264,7 +255,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
src={Arrow2}
alt="arrow"
className={`${
isDocsListOpen ? 'rotate-0' : 'rotate-180'
!isDocsListOpen ? 'rotate-0' : 'rotate-180'
} ml-auto mr-3 w-3 transition-all`}
/>
</div>
@ -290,7 +281,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
<p className="ml-5 flex-1 overflow-hidden overflow-ellipsis whitespace-nowrap py-3">
{doc.name} {doc.version}
</p>
{doc.location === 'local' ? (
{doc.location === 'local' && (
<img
src={Exit}
alt="Exit"
@ -301,7 +292,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
handleDeleteClick(index, doc);
}}
/>
) : null}
)}
</div>
);
}

View File

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="11" viewBox="0 0 14 11" fill="none">
<path d="M4.95919 10.1906C4.84318 10.1902 4.72847 10.166 4.62222 10.1194C4.51596 10.0729 4.42041 10.0049 4.34152 9.91985L0.229353 5.54538C0.0756344 5.38157 -0.00671208 5.1634 0.000428491 4.93886C0.00756906 4.71433 0.103612 4.50183 0.267428 4.34812C0.431245 4.1944 0.649417 4.11205 0.873948 4.11919C1.09848 4.12633 1.31098 4.22238 1.4647 4.38619L4.95073 8.10068L12.0666 0.316329C12.1389 0.226405 12.2287 0.152193 12.3306 0.0982513C12.4326 0.0443098 12.5445 0.0117775 12.6594 0.00265255C12.7744 -0.00647237 12.89 0.00800286 12.9992 0.045189C13.1084 0.082375 13.2088 0.141487 13.2943 0.218894C13.3798 0.296301 13.4485 0.390369 13.4964 0.49532C13.5442 0.600272 13.57 0.713891 13.5723 0.829198C13.5746 0.944506 13.5534 1.05907 13.5098 1.16585C13.4662 1.27263 13.4012 1.36937 13.3189 1.45014L5.58533 9.91139C5.50718 9.998 5.41197 10.0675 5.30567 10.1156C5.19938 10.1636 5.0843 10.1892 4.96766 10.1906H4.95919Z" fill="#747474"/>
</svg>

After

Width:  |  Height:  |  Size: 1.0 KiB

View File

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="11" viewBox="0 0 14 11" fill="none">
<path d="M4.95919 10.1906C4.84318 10.1902 4.72847 10.166 4.62222 10.1194C4.51596 10.0729 4.42041 10.0049 4.34152 9.91985L0.229353 5.54538C0.0756344 5.38157 -0.00671208 5.1634 0.000428491 4.93886C0.00756906 4.71433 0.103612 4.50183 0.267428 4.34812C0.431245 4.1944 0.649417 4.11205 0.873948 4.11919C1.09848 4.12633 1.31098 4.22238 1.4647 4.38619L4.95073 8.10068L12.0666 0.316329C12.1389 0.226405 12.2287 0.152193 12.3306 0.0982513C12.4326 0.0443098 12.5445 0.0117775 12.6594 0.00265255C12.7744 -0.00647237 12.89 0.00800286 12.9992 0.045189C13.1084 0.082375 13.2088 0.141487 13.2943 0.218894C13.3798 0.296301 13.4485 0.390369 13.4964 0.49532C13.5442 0.600272 13.57 0.713891 13.5723 0.829198C13.5746 0.944506 13.5534 1.05907 13.5098 1.16585C13.4662 1.27263 13.4012 1.36937 13.3189 1.45014L5.58533 9.91139C5.50718 9.998 5.41197 10.0675 5.30567 10.1156C5.19938 10.1636 5.0843 10.1892 4.96766 10.1906H4.95919Z" fill="#747474"/>
</svg>

After

Width:  |  Height:  |  Size: 1.0 KiB

View File

@ -0,0 +1,3 @@
<svg width="14" height="17" stroke-width="1.15" viewBox="0 0 14 17" >
<path d="M13.8013 5.01282L8.80645 0.191795C8.67953 0.0691399 8.50734 0.000152609 8.32774 0H6.09677C5.43801 0 4.80623 0.252586 4.34041 0.702193C3.8746 1.1518 3.6129 1.7616 3.6129 2.39744V3.48718H2.48387C1.82511 3.48718 1.19332 3.73977 0.727509 4.18937C0.261693 4.63898 0 5.24878 0 5.88462V14.6026C0 15.2384 0.261693 15.8482 0.727509 16.2978C1.19332 16.7474 1.82511 17 2.48387 17H8.80645C9.46521 17 10.097 16.7474 10.5628 16.2978C11.0286 15.8482 11.2903 15.2384 11.2903 14.6026V13.5128H11.5161C12.1749 13.5128 12.8067 13.2602 13.2725 12.8106C13.7383 12.361 14 11.7512 14 11.1154V5.44872C13.9929 5.28447 13.9219 5.12884 13.8013 5.01282ZM9.03226 2.23179L11.6877 4.79487H9.03226V2.23179ZM9.93548 14.6026C9.93548 14.8916 9.81653 15.1688 9.6048 15.3731C9.39306 15.5775 9.10589 15.6923 8.80645 15.6923H2.48387C2.18443 15.6923 1.89726 15.5775 1.68552 15.3731C1.47379 15.1688 1.35484 14.8916 1.35484 14.6026V5.88462C1.35484 5.5956 1.47379 5.31842 1.68552 5.11405C1.89726 4.90968 2.18443 4.79487 2.48387 4.79487H3.6129V11.1154C3.6129 11.7512 3.8746 12.361 4.34041 12.8106C4.80623 13.2602 5.43801 13.5128 6.09677 13.5128H9.93548V14.6026ZM11.5161 12.2051H6.09677C5.79734 12.2051 5.51016 12.0903 5.29843 11.886C5.08669 11.6816 4.96774 11.4044 4.96774 11.1154V2.39744C4.96774 2.10842 5.08669 1.83124 5.29843 1.62687C5.51016 1.4225 5.79734 1.30769 6.09677 1.30769H7.67742V5.44872C7.67976 5.62143 7.75188 5.78643 7.87842 5.90856C8.00496 6.03069 8.1759 6.10031 8.35484 6.10256H12.6452V11.1154C12.6452 11.4044 12.5262 11.6816 12.3145 11.886C12.1027 12.0903 11.8156 12.2051 11.5161 12.2051Z" fill="#949494"/>
</svg>

After

Width:  |  Height:  |  Size: 1.6 KiB

View File

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="15" viewBox="0 0 16 15" fill="none">
<path d="M10.0588 2.74568L12.5294 5.15732M8.41176 14H15M1.82353 10.7845L1 14L4.29412 13.1961L13.8355 3.88237C14.1443 3.58087 14.3178 3.172 14.3178 2.74568C14.3178 2.31936 14.1443 1.9105 13.8355 1.609L13.6939 1.47073C13.385 1.16932 12.9662 1 12.5294 1C12.0927 1 11.6738 1.16932 11.3649 1.47073L1.82353 10.7845Z" stroke="#747474" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 500 B

View File

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="12" height="15" viewBox="0 0 12 15" fill="none">
<path d="M0.857143 13.3333C0.857143 13.7754 1.03775 14.1993 1.35925 14.5118C1.68074 14.8244 2.11677 15 2.57143 15H9.42857C9.88323 15 10.3193 14.8244 10.6408 14.5118C10.9622 14.1993 11.1429 13.7754 11.1429 13.3333V3.33333H0.857143V13.3333ZM2.57143 5H9.42857V13.3333H2.57143V5ZM9 0.833333L8.14286 0H3.85714L3 0.833333H0V2.5H12V0.833333H9Z" fill="#747474"/>
</svg>

After

Width:  |  Height:  |  Size: 459 B

View File

@ -29,14 +29,17 @@ export default function Conversation() {
scrollIntoView();
}, [queries.length, queries[queries.length - 1]]);
useEffect(() => {
const element = document.getElementById('inputbox') as HTMLInputElement;
if (element) {
element.focus();
}
}, []);
useEffect(() => {
const observerCallback: IntersectionObserverCallback = (entries) => {
entries.forEach((entry) => {
if (entry.isIntersecting) {
setHasScrolledToLast(true);
} else {
setHasScrolledToLast(false);
}
setHasScrolledToLast(entry.isIntersecting);
});
};
@ -81,7 +84,7 @@ export default function Conversation() {
responseView = (
<ConversationBubble
ref={endMessageRef}
className={`${index === queries.length - 1 ? 'mb-24' : 'mb-7'}`}
className={`${index === queries.length - 1 ? 'mb-32' : 'mb-7'}`}
key={`${index}ERROR`}
message={query.error}
type="ERROR"
@ -91,7 +94,7 @@ export default function Conversation() {
responseView = (
<ConversationBubble
ref={endMessageRef}
className={`${index === queries.length - 1 ? 'mb-24' : 'mb-7'}`}
className={`${index === queries.length - 1 ? 'mb-32' : 'mb-7'}`}
key={`${index}ANSWER`}
message={query.response}
type={'ANSWER'}
@ -114,7 +117,7 @@ export default function Conversation() {
return (
<div className="flex flex-col justify-center p-4 md:flex-row">
{queries.length > 0 && !hasScrolledToLast ? (
{queries.length > 0 && !hasScrolledToLast && (
<button
onClick={scrollIntoView}
aria-label="scroll to bottom"
@ -126,7 +129,7 @@ export default function Conversation() {
className="h4- w-4 opacity-50 md:h-5 md:w-5"
/>
</button>
) : null}
)}
{queries.length > 0 && (
<div className="mt-20 flex flex-col transition-all md:w-3/4">
@ -134,7 +137,7 @@ export default function Conversation() {
return (
<Fragment key={index}>
<ConversationBubble
className={'mb-7'}
className={'last:mb-27 mb-7'}
key={`${index}QUESTION`}
message={query.prompt}
type="QUESTION"
@ -149,14 +152,16 @@ export default function Conversation() {
{queries.length === 0 && (
<Hero className="mt-24 h-[100vh] md:mt-52"></Hero>
)}
<div className="relative bottom-0 flex w-10/12 flex-col items-end self-center md:fixed md:w-[50%]">
<div className="relative bottom-0 flex w-10/12 flex-col items-end self-center bg-white pt-3 md:fixed md:w-[65%]">
<div className="flex h-full w-full">
<div
id="inputbox"
ref={inputRef}
tabIndex={1}
placeholder="Type your message here..."
contentEditable
onPaste={handlePaste}
className={`border-000000 overflow-x-hidden; max-h-24 min-h-[2.6rem] w-full overflow-y-auto whitespace-pre-wrap rounded-xl border bg-white py-2 pl-4 pr-9 leading-7 opacity-100 focus:outline-none`}
className={`border-000000 overflow-x-hidden max-h-24 min-h-[2.6rem] w-full overflow-y-auto whitespace-pre-wrap rounded-3xl border bg-white py-2 pl-4 pr-9 text-base leading-7 opacity-100 focus:outline-none`}
onKeyDown={(e) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
@ -187,7 +192,7 @@ export default function Conversation() {
</div>
)}
</div>
<p className="w-[100vw] self-center bg-white p-5 text-center text-xs text-gray-2000">
<p className="w-[100vw] self-center bg-white p-5 text-center text-xs text-gray-2000 md:w-full">
This is a chatbot that uses GPT-3, Faiss, and LangChain to answer
questions.
</p>

View File

@ -0,0 +1,11 @@
.list p {
display: inline;
}
.list li:not(:first-child) {
margin-top: 1em;
}
.list li > .list {
margin-top: 1em;
}

View File

@ -1,10 +1,14 @@
import { forwardRef, useState } from 'react';
import Avatar from '../Avatar';
import { FEEDBACK, MESSAGE_TYPE } from './conversationModels';
import classes from './ConversationBubble.module.css';
import Alert from './../assets/alert.svg';
import { ReactComponent as Like } from './../assets/like.svg';
import { ReactComponent as Dislike } from './../assets/dislike.svg';
import { ReactComponent as Copy } from './../assets/copy.svg';
import { ReactComponent as Checkmark } from './../assets/checkmark.svg';
import ReactMarkdown from 'react-markdown';
import copy from 'copy-to-clipboard';
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
import { vscDarkPlus } from 'react-syntax-highlighter/dist/cjs/styles/prism';
@ -24,18 +28,18 @@ const ConversationBubble = forwardRef<
{ message, type, className, feedback, handleFeedback, sources },
ref,
) {
const [showFeedback, setShowFeedback] = useState(false);
const [openSource, setOpenSource] = useState<number | null>(null);
const List = ({
ordered,
children,
}: {
ordered?: boolean;
children: React.ReactNode;
}) => {
const Tag = ordered ? 'ol' : 'ul';
return <Tag className="list-inside list-disc">{children}</Tag>;
const [copied, setCopied] = useState(false);
const handleCopyClick = (text: string) => {
copy(text);
setCopied(true);
// Reset copied to false after 2 seconds
setTimeout(() => {
setCopied(false);
}, 2000);
};
let bubble;
if (type === 'QUESTION') {
@ -51,26 +55,21 @@ const ConversationBubble = forwardRef<
);
} else {
bubble = (
<div
ref={ref}
className={`flex self-start ${className} flex-col`}
onMouseEnter={() => setShowFeedback(true)}
onMouseLeave={() => setShowFeedback(false)}
>
<div ref={ref} className={`flex self-start ${className} group flex-col`}>
<div className="flex self-start">
<Avatar className="mt-2 text-2xl" avatar="🦖"></Avatar>
<div
className={`ml-2 mr-5 flex flex-col items-center rounded-3xl bg-gray-1000 p-3.5 ${
className={`ml-2 mr-5 flex flex-col rounded-3xl bg-gray-1000 p-3.5 ${
type === 'ERROR'
? ' rounded-lg border border-red-2000 bg-red-1000 p-2 text-red-3000'
: ''
? 'flex-row rounded-full border border-transparent bg-[#FFE7E7] p-2 py-5 text-sm font-normal text-red-3000 dark:border-red-2000 dark:text-white'
: 'flex-col rounded-3xl'
}`}
>
{type === 'ERROR' && (
<img src={Alert} alt="alert" className="mr-2 inline" />
)}
<ReactMarkdown
className="whitespace-pre-wrap break-words"
className="max-w-screen-md whitespace-pre-wrap break-words"
components={{
code({ node, inline, className, children, ...props }) {
const match = /language-(\w+)/.exec(className || '');
@ -90,29 +89,35 @@ const ConversationBubble = forwardRef<
</code>
);
},
ul({ node, children }) {
return <List>{children}</List>;
ul({ children }) {
return (
<ul
className={`list-inside list-disc whitespace-normal pl-4 ${classes.list}`}
>
{children}
</ul>
);
},
ol({ node, children }) {
return <List ordered>{children}</List>;
ol({ children }) {
return (
<ol
className={`list-inside list-decimal whitespace-normal pl-4 ${classes.list}`}
>
{children}
</ol>
);
},
}}
>
{message}
</ReactMarkdown>
{DisableSourceFE || type === 'ERROR' ? null : (
<span className="mt-3 h-px w-full bg-[#DEDEDE]"></span>
)}
<div className="mt-3 flex w-full flex-row flex-wrap items-center justify-start gap-2">
{DisableSourceFE || type === 'ERROR' ? null : (
<div className="py-1 px-2 text-base font-semibold">
Sources:
</div>
)}
<div className="flex flex-row flex-wrap items-center justify-start gap-2">
{DisableSourceFE
? null
: sources?.map((source, index) => (
<>
<span className="mt-3 h-px w-full bg-[#DEDEDE]"></span>
<div className="mt-3 flex w-full flex-row flex-wrap items-center justify-start gap-2">
<div className="py-1 text-base font-semibold">Sources:</div>
<div className="flex flex-row flex-wrap items-center justify-start gap-2">
{sources?.map((source, index) => (
<div
key={index}
className={`max-w-fit cursor-pointer rounded-[28px] py-1 px-4 ${
@ -135,18 +140,36 @@ const ConversationBubble = forwardRef<
</p>
</div>
))}
</div>
</div>
</div>
</div>
</>
)}
</div>
<div
className={`mr-2 flex items-center justify-center ${
feedback === 'LIKE' || (type !== 'ERROR' && showFeedback)
? ''
: 'md:invisible'
className={`relative mr-2 flex items-center justify-center md:invisible ${
type !== 'ERROR' ? 'group-hover:md:visible' : ''
}`}
>
{copied ? (
<Checkmark className="absolute left-2 top-4" />
) : (
<Copy
className={`absolute left-2 top-4 cursor-pointer fill-gray-4000 hover:stroke-gray-4000`}
onClick={() => {
handleCopyClick(message);
}}
></Copy>
)}
</div>
<div
className={`relative mr-2 flex items-center justify-center md:invisible ${
feedback === 'LIKE' || type !== 'ERROR'
? 'group-hover:md:visible'
: ''
}`}
>
<Like
className={`cursor-pointer ${
className={`absolute left-6 top-4 cursor-pointer ${
feedback === 'LIKE'
? 'fill-purple-30 stroke-purple-30'
: 'fill-none stroke-gray-4000 hover:fill-gray-4000'
@ -155,14 +178,14 @@ const ConversationBubble = forwardRef<
></Like>
</div>
<div
className={`mr-10 flex items-center justify-center ${
feedback === 'DISLIKE' || (type !== 'ERROR' && showFeedback)
? ''
: 'md:invisible'
className={`relative mr-10 flex items-center justify-center md:invisible ${
feedback === 'DISLIKE' || type !== 'ERROR'
? 'group-hover:md:visible'
: ''
}`}
>
<Dislike
className={`cursor-pointer ${
className={`absolute left-10 top-4 cursor-pointer ${
feedback === 'DISLIKE'
? 'fill-red-2000 stroke-red-2000'
: 'fill-none stroke-gray-4000 hover:fill-gray-4000'

View File

@ -0,0 +1,128 @@
import { useEffect, useRef, useState } from 'react';
import { useSelector } from 'react-redux';
import Edit from '../assets/edit.svg';
import Exit from '../assets/exit.svg';
import Message from '../assets/message.svg';
import CheckMark from '../assets/checkmark.svg';
import Trash from '../assets/trash.svg';
import { selectConversationId } from '../preferences/preferenceSlice';
import { useOutsideAlerter } from '../hooks';
interface ConversationProps {
name: string;
id: string;
}
interface ConversationTileProps {
conversation: ConversationProps;
selectConversation: (arg1: string) => void;
onDeleteConversation: (arg1: string) => void;
onSave: ({ name, id }: ConversationProps) => void;
}
export default function ConversationTile({
conversation,
selectConversation,
onDeleteConversation,
onSave,
}: ConversationTileProps) {
const conversationId = useSelector(selectConversationId);
const tileRef = useRef<HTMLInputElement>(null);
const [isEdit, setIsEdit] = useState(false);
const [conversationName, setConversationsName] = useState('');
useOutsideAlerter(
tileRef,
() =>
handleSaveConversation({
id: conversationId || conversation.id,
name: conversationName,
}),
[conversationName],
);
useEffect(() => {
setConversationsName(conversation.name);
}, [conversation.name]);
function handleEditConversation() {
setIsEdit(true);
}
function handleSaveConversation(changedConversation: ConversationProps) {
if (changedConversation.name.trim().length) {
onSave(changedConversation);
setIsEdit(false);
} else {
onClear();
}
}
function onClear() {
setConversationsName(conversation.name);
setIsEdit(false);
}
return (
<div
ref={tileRef}
onClick={() => {
selectConversation(conversation.id);
}}
className={`my-auto mx-4 mt-4 flex h-9 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${
conversationId === conversation.id ? 'bg-gray-100' : ''
}`}
>
<div
className={`flex ${
conversationId === conversation.id ? 'w-[75%]' : 'w-[95%]'
} gap-4`}
>
<img src={Message} className="ml-4 w-5"></img>
{isEdit ? (
<input
autoFocus
type="text"
className="h-6 w-full px-1 text-sm font-normal leading-6 outline-[#0075FF] focus:outline-1"
value={conversationName}
onChange={(e) => setConversationsName(e.target.value)}
/>
) : (
<p className="my-auto overflow-hidden overflow-ellipsis whitespace-nowrap text-sm font-normal leading-6 text-eerie-black">
{conversationName}
</p>
)}
</div>
{conversationId === conversation.id && (
<div className="flex">
<img
src={isEdit ? CheckMark : Edit}
alt="Edit"
className="mr-2 h-4 w-4 cursor-pointer hover:opacity-50"
id={`img-${conversation.id}`}
onClick={(event) => {
event.stopPropagation();
isEdit
? handleSaveConversation({
id: conversationId,
name: conversationName,
})
: handleEditConversation();
}}
/>
<img
src={isEdit ? Exit : Trash}
alt="Exit"
className={`mr-4 ${
isEdit ? 'h-3 w-3' : 'h-4 w-4'
} mt-px cursor-pointer hover:opacity-50`}
id={`img-${conversation.id}`}
onClick={(event) => {
event.stopPropagation();
isEdit ? onClear() : onDeleteConversation(conversation.id);
}}
/>
</div>
)}
</div>
);
}

View File

@ -55,9 +55,8 @@ export default function Upload({
setProgress(undefined);
setModalState('INACTIVE');
}}
className={`rounded-3xl bg-purple-30 px-4 py-2 text-sm font-medium text-white ${
isCancellable ? '' : 'hidden'
}`}
className={`rounded-3xl bg-purple-30 px-4 py-2 text-sm font-medium text-white ${isCancellable ? '' : 'hidden'
}`}
>
Finish
</button>
@ -206,7 +205,9 @@ export default function Upload({
<div className="flex flex-row-reverse">
<button
onClick={uploadFile}
className="ml-6 rounded-3xl bg-purple-30 py-2 px-6 text-white"
className={`ml-6 rounded-3xl bg-purple-30 text-white ${files.length > 0 ? '' : 'bg-opacity-75 text-opacity-80'
} py-2 px-6`}
disabled={files.length === 0} // Disable the button if no file is selected
>
Train
</button>
@ -227,9 +228,8 @@ export default function Upload({
return (
<article
className={`${
modalState === 'ACTIVE' ? 'visible' : 'hidden'
} absolute z-30 h-screen w-screen bg-gray-alpha`}
className={`${modalState === 'ACTIVE' ? 'visible' : 'hidden'
} absolute z-30 h-screen w-screen bg-gray-alpha`}
>
<article className="mx-auto mt-24 flex w-[90vw] max-w-lg flex-col gap-4 rounded-lg bg-white p-6 shadow-lg">
{view}

View File

@ -1,6 +1,7 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'],
darkMode: 'class',
theme: {
extend: {
spacing: {

View File

@ -78,14 +78,12 @@ def ingest(yes: bool = typer.Option(False, "-y", "--yes", prompt=False,
# Here we check for command-line arguments for bot calls.
# If no argument is passed or --yes is not set, the user's
# permission is requested before calling the API.
if len(sys.argv) > 1:
if yes:
call_openai_api(docs, folder_name)
else:
get_user_permission(docs, folder_name)
if len(sys.argv) > 1 and yes:
call_openai_api(docs, folder_name)
else:
get_user_permission(docs, folder_name)
folder_counts = defaultdict(int)
folder_names = []
for dir_path in dir:
@ -110,14 +108,19 @@ def convert(dir: Optional[str] = typer.Option("inputs",
Creates documentation linked to original functions from specified location.
By default /inputs folder is used, .py is parsed.
"""
if formats == 'py':
functions_dict, classes_dict = extract_py(dir)
elif formats == 'js':
functions_dict, classes_dict = extract_js(dir)
elif formats == 'java':
functions_dict, classes_dict = extract_java(dir)
# Using a dictionary to map between the formats and their respective extraction functions
# makes the code more scalable. When adding more formats in the future,
# you only need to update the extraction_functions dictionary.
extraction_functions = {
'py': extract_py,
'js': extract_js,
'java': extract_java
}
if formats in extraction_functions:
functions_dict, classes_dict = extraction_functions[formats](dir)
else:
raise Exception("Sorry, language not supported yet")
raise Exception("Sorry, language not supported yet")
transform_to_docs(functions_dict, classes_dict, formats, dir)
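
The comment above explains the motivation for the dictionary dispatch: registering a new language becomes a one-line entry instead of another elif branch. Below is a minimal, self-contained sketch of that pattern; the stub extractor bodies and the extract_go entry are illustrative assumptions, not part of this commit or the repository.

```python
# Sketch of the extraction_functions dispatch pattern (stubs are illustrative).

def extract_py(dir):
    # stand-in for the real Python extractor
    return {"parse": "def parse(...)"}, {"Parser": "class Parser: ..."}

def extract_js(dir):
    # stand-in for the real JavaScript extractor
    return {"parse": "function parse(...)"}, {}

def extract_java(dir):
    # stand-in for the real Java extractor
    return {"parse": "Object parse(...)"}, {}

def extract_go(dir):
    # hypothetical extractor showing how a new format would be registered
    return {"Parse": "func Parse(...)"}, {}

extraction_functions = {
    'py': extract_py,
    'js': extract_js,
    'java': extract_java,
    'go': extract_go,  # adding a language is a single new entry
}

def convert(dir='inputs', formats='py'):
    if formats not in extraction_functions:
        raise Exception("Sorry, language not supported yet")
    return extraction_functions[formats](dir)

if __name__ == '__main__':
    functions_dict, classes_dict = convert('inputs', 'go')
    print(functions_dict, classes_dict)
```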

View File

@ -47,7 +47,7 @@ javalang==0.13.0
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.3.1
langchain==0.0.308
langchain==0.0.312
lxml==4.9.3
manifest-ml==0.1.8
MarkupSafe==2.1.3

View File

@ -0,0 +1,19 @@
"""
Tests regarding the vector store class, including checking
compatibility between different transformers and local vector
stores (index.faiss)
"""
import pytest
from application.vectorstore.faiss import FaissStore
from application.core.settings import settings
def test_init_local_faiss_store_huggingface():
"""
Assert that initializing a FaissStore with the HuggingFace
sentence transformer below, against the index.faiss file in
the application/ folder, raises a dimension mismatch error.
"""
settings.EMBEDDINGS_NAME = "huggingface_sentence-transformers/all-mpnet-base-v2"
with pytest.raises(ValueError):
FaissStore("application/", "", None)