Merge branch 'arc53:main' into main
23
.github/labeler.yml
vendored
Normal file
@ -0,0 +1,23 @@
repo:
- '*'

github:
- .github/**/*

application:
- application/**/*

docs:
- docs/**/*

extensions:
- extensions/**/*

frontend:
- frontend/**/*

scripts:
- scripts/**/*

tests:
- tests/**/*
15
.github/workflows/labeler.yml
vendored
Normal file
@ -0,0 +1,15 @@
# https://github.com/actions/labeler
name: Pull Request Labeler
on:
- pull_request_target
jobs:
  triage:
    permissions:
      contents: read
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
    - uses: actions/labeler@v4
      with:
        repo-token: "${{ secrets.GITHUB_TOKEN }}"
        sync-labels: true
@ -1,41 +1,45 @@
# Welcome to DocsGPT Contributing Guidelines

Thank you for choosing this project to contribute to. We are all very grateful!
Thank you for choosing to contribute to DocsGPT! We are all very grateful!

### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)

# We accept different types of contributions

📣 Discussions - where you can start a new topic or answer some questions
📣 **Discussions** - Engage in conversations, start new topics, or help answer questions.

🐞 Issues - This is how we track tasks, sometimes it is bugs that need fixing, and sometimes it is new features
🐞 **Issues** - This is where we keep track of tasks. It could be bugs, fixes, or suggestions for new features.

🛠️ Pull requests - This is how you can suggest changes to our repository, to work on existing issues or add new features
🛠️ **Pull requests** - Suggest changes to our repository, either by working on existing issues or adding new features.

📚 Wiki - where we have our documentation
📚 **Wiki** - This is where our documentation resides.

## 🐞 Issues and Pull requests

We value contributions to our issues in the form of discussion or suggestions. We recommend that you check out existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).

If you want to contribute by writing code, there are a few things that you should know before doing it:
Before creating issues, please check out how the latest version of our app looks and works by launching it via [Quickstart](https://github.com/arc53/DocsGPT#quickstart); the version on our live demo is slightly modified with a login. Your issues should relate to the version that you can launch via [Quickstart](https://github.com/arc53/DocsGPT#quickstart).

If you're interested in contributing code, here are some important things to know:

We have a frontend built with React (Vite) and a backend in Python.

We have a frontend in React (Vite) and backend in Python.

### If you are looking to contribute to frontend (⚛️React, Vite):

- The current frontend is being migrated from `/application` to `/frontend` with a new design, so please contribute to the new one.
- The current frontend is being migrated from [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) to [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) with a new design, so please contribute to the new one.
- Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
- The Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).

Please try to follow the guidelines.

### If you are looking to contribute to Backend (🐍 Python):

- Check out our issues and contribute to `/application` or `/scripts` (ignore old `ingest_rst.py` `ingest_rst_sphinx.py` files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your PR, ensure it is queryable after ingesting some test data.

- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts) (please disregard old [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py) files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your Pull Request, ensure it can be queried after ingesting some test data.

### Testing

To run unit tests from the root of the repository, execute:

@ -43,10 +47,11 @@ To run unit tests from the root of the repository, execute:
python -m pytest
```

### Workflow:
Create a fork, make changes on your forked repository, and submit changes as a pull request.
### Workflow 📈:
- Fork repository
- Make the required changes on your forked version
- Commit those changes and submit them as a pull request so that they are reflected in the main repository.

## Questions/collaboration
Please join our [Discord](https://discord.gg/n5BX8dh8rU). Don't hesitate; we are very friendly and welcoming to new contributors.

Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU). We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
# Thank you so much for considering contributing to DocsGPT!🙏
@ -17,14 +17,14 @@ Familiarize yourself with the current contributions and our [Roadmap](https://gi
Deciding to contribute with code? Here are some insights based on the area of your interest:

- Frontend (⚛️React, Vite):
  - Most of the code is located in `/frontend` folder. You can also check out our React extension in /extensions/react-widget.
  - Most of the code is located in [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) folder. You can also check out our React extension in [`/extensions/react-widget`](https://github.com/arc53/DocsGPT/tree/main/extensions/react-widget).
  - For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
  - Ensure you adhere to the established guidelines.

- Backend (🐍Python):
  - Focus on `/application` or `/scripts`. However, avoid the files ingest_rst.py and ingest_rst_sphinx.py, as they will soon be deprecated.
  - Focus on [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts). However, avoid the files [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) and [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py), as they will soon be deprecated.
  - Newly added code should come with relevant unit tests (pytest).
  - Refer to the `/tests` folder for test suites.
  - Refer to the [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder for test suites.

Check out our [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
66
README.md
@ -7,9 +7,9 @@
</p>

<p align="left">
<strong>DocsGPT</strong> is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
<strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.

Say goodbye to time-consuming manual searches, and let <strong>DocsGPT</strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
Say goodbye to time-consuming manual searches, and let <strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
</p>

<div align="center">
@ -21,10 +21,10 @@ Say goodbye to time-consuming manual searches, and let <strong>DocsGPT</strong>

</div>

### Production Support/ Help for companies:
### Production Support / Help for companies:

We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.
- [Schedule Demo 👋](https://cal.com/arc53/docsgpt-demo-b2b?date=2023-10-04&month=2023-10)
- [Book Demo 👋](https://cal.com/arc53/docsgpt-demo-b2b)
- [Send Email ✉️](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)

### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)

@ -54,17 +54,20 @@ If you don't have enough resources to run it, you can use bitsnbytes to quantize

## Useful links
[Live preview](https://docsgpt.arc53.com/)

- 🔍🔥 [Live preview](https://docsgpt.arc53.com/)

[Join our Discord](https://discord.gg/n5BX8dh8rU)
- 💬🎉[Join our Discord](https://discord.gg/n5BX8dh8rU)

[Guides](https://docs.docsgpt.co.uk/)
- 📚😎 [Guides](https://docs.docsgpt.co.uk/)

[Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- 👩💻👨💻 [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)

- 🗂️🚀 [How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)

- 🏠🔐 [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)

[How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)

[How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)

## Project structure
@ -89,15 +92,15 @@ It will install all the dependencies and allow you to download the local model o
Otherwise, refer to this Guide:

1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your OpenAI API key and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not.
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys) and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not.
   It should look like this inside:

```
API_KEY=Yourkey
VITE_API_STREAMING=true
```
See optional environment variables in the `/.env-template` and `/application/.env_sample` files.
3. Run `./run-with-docker-compose.sh`.
See optional environment variables in the [/.env-template](https://github.com/arc53/DocsGPT/blob/main/.env-template) and [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) files.
3. Run [./run-with-docker-compose.sh](https://github.com/arc53/DocsGPT/blob/main/run-with-docker-compose.sh).
4. Navigate to http://localhost:5173/.

To stop, just run `Ctrl + C`.
@ -105,7 +108,7 @@ To stop, just run `Ctrl + C`.
## Development environments

### Spin up mongo and redis
For development, only two containers are used from `docker-compose.yaml` (by deleting all services except for Redis and Mongo).
For development, only two containers are used from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) (by deleting all services except for Redis and Mongo).
See file [docker-compose-dev.yaml](./docker-compose-dev.yaml).

Run
@ -119,18 +122,27 @@ docker compose -f docker-compose-dev.yaml up -d
Make sure you have Python 3.10 or 3.11 installed.

1. Export required environment variables or prepare a `.env` file in the `/application` folder:
   - Copy `.env_sample` and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields.
   - Copy [.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields.

   (check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.)

2. (optional) Create a Python virtual environment:
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.

a) On Mac OS and Linux
```commandline
python -m venv venv
. venv/bin/activate
```
3. Change to the `application/` subdir and install dependencies for the backend:
b) On Windows
```commandline
pip install -r application/requirements.txt
python -m venv venv
venv/Scripts/activate
```

3. Change to the `application/` subdir by the command `cd application/` and install dependencies for the backend:
```commandline
pip install -r requirements.txt
```
4. Run the app using `flask run --host=0.0.0.0 --port=7091`.
5. Start worker with `celery -A application.app.celery worker -l INFO`.
@ -139,9 +151,21 @@ pip install -r application/requirements.txt

Make sure you have Node version 16 or higher.

1. Navigate to the `/frontend` folder.
2. Install dependencies by running `npm install`.
3. Run the app using `npm run dev`.
1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder.
2. Install required packages `husky` and `vite` (ignore if installed).
```commandline
npm install husky -g
npm install vite -g
```
3. Install dependencies by running `npm install --include=dev`.
4. Run the app using `npm run dev`.

## Contributing
Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information about how to get involved. We welcome issues, questions, and pull requests.

## Code Of Conduct
We as members, contributors, and leaders, pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more information about contributing.

## Many Thanks To Our Contributors

@ -149,5 +173,7 @@ Make sure you have Node version 16 or higher.
<img src="https://contrib.rocks/image?repo=arc53/DocsGPT" />
</a>

## License
The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file.

Built with [🦜️🔗 LangChain](https://github.com/hwchase17/langchain)
@ -53,6 +53,15 @@ def get_single_conversation():
    conversation = conversations_collection.find_one({"_id": ObjectId(conversation_id)})
    return jsonify(conversation['queries'])

@user.route("/api/update_conversation_name", methods=["POST"])
def update_conversation_name():
    # update data for a conversation
    data = request.get_json()
    id = data["id"]
    name = data["name"]
    conversations_collection.update_one({"_id": ObjectId(id)},{"$set":{"name":name}})
    return {"status": "ok"}


@user.route("/api/feedback", methods=["POST"])
def api_feedback():
@ -27,7 +27,10 @@ celery.config_from_object("application.celeryconfig")

@app.route("/")
def home():
    return redirect('http://localhost:5173') if request.remote_addr in ('0.0.0.0', '127.0.0.1', 'localhost', '172.18.0.1') else 'Welcome to DocsGPT Backend!'
    if request.remote_addr in ('0.0.0.0', '127.0.0.1', 'localhost', '172.18.0.1'):
        return redirect('http://localhost:5173')
    else:
        return 'Welcome to DocsGPT Backend!'

@app.after_request
def after_request(response):
@ -32,6 +32,12 @@ class Settings(BaseSettings):
    ELASTIC_URL: str = None  # url for elasticsearch
    ELASTIC_INDEX: str = "docsgpt"  # index name for elasticsearch

    # SageMaker config
    SAGEMAKER_ENDPOINT: str = None  # SageMaker endpoint name
    SAGEMAKER_REGION: str = None  # SageMaker region name
    SAGEMAKER_ACCESS_KEY: str = None  # SageMaker access key
    SAGEMAKER_SECRET_KEY: str = None  # SageMaker secret key


path = Path(__file__).parent.parent.absolute()
settings = Settings(_env_file=path.joinpath(".env"), _env_file_encoding="utf-8")
@ -1,27 +1,139 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
import requests
import json
import io


class LineIterator:
    """
    A helper class for parsing the byte stream input.

    The output of the model will be in the following format:
    ```
    b'{"outputs": [" a"]}\n'
    b'{"outputs": [" challenging"]}\n'
    b'{"outputs": [" problem"]}\n'
    ...
    ```

    While usually each PayloadPart event from the event stream will contain a byte array
    with a full json, this is not guaranteed and some of the json objects may be split across
    PayloadPart events. For example:
    ```
    {'PayloadPart': {'Bytes': b'{"outputs": '}}
    {'PayloadPart': {'Bytes': b'[" problem"]}\n'}}
    ```

    This class accounts for this by concatenating bytes written via the 'write' function
    and then exposing a method which will return lines (ending with a '\n' character) within
    the buffer via the 'scan_lines' function. It maintains the position of the last read
    position to ensure that previous bytes are not exposed again.
    """

    def __init__(self, stream):
        self.byte_iterator = iter(stream)
        self.buffer = io.BytesIO()
        self.read_pos = 0

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            self.buffer.seek(self.read_pos)
            line = self.buffer.readline()
            if line and line[-1] == ord('\n'):
                self.read_pos += len(line)
                return line[:-1]
            try:
                chunk = next(self.byte_iterator)
            except StopIteration:
                if self.read_pos < self.buffer.getbuffer().nbytes:
                    continue
                raise
            if 'PayloadPart' not in chunk:
                print('Unknown event type:' + chunk)
                continue
            self.buffer.seek(0, io.SEEK_END)
            self.buffer.write(chunk['PayloadPart']['Bytes'])


class SagemakerAPILLM(BaseLLM):

    def __init__(self, *args, **kwargs):
        self.url = settings.SAGEMAKER_API_URL
        import boto3
        runtime = boto3.client(
            'runtime.sagemaker',
            aws_access_key_id='xxx',
            aws_secret_access_key='xxx',
            region_name='us-west-2'
        )

        self.endpoint = settings.SAGEMAKER_ENDPOINT
        self.runtime = runtime

    def gen(self, model, engine, messages, stream=False, **kwargs):
        context = messages[0]['content']
        user_question = messages[-1]['content']
        prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"

        response = requests.post(
            url=self.url,
            headers={
                "Content-Type": "application/json; charset=utf-8",
            },
            data=json.dumps({"input": prompt})
        )
        # Construct payload for endpoint
        payload = {
            "inputs": prompt,
            "stream": False,
            "parameters": {
                "do_sample": True,
                "temperature": 0.1,
                "max_new_tokens": 30,
                "repetition_penalty": 1.03,
                "stop": ["</s>", "###"]
            }
        }
        body_bytes = json.dumps(payload).encode('utf-8')

        return response.json()['answer']
        # Invoke the endpoint
        response = self.runtime.invoke_endpoint(EndpointName=self.endpoint,
                                                ContentType='application/json',
                                                Body=body_bytes)
        result = json.loads(response['Body'].read().decode())
        import sys
        print(result[0]['generated_text'], file=sys.stderr)
        return result[0]['generated_text'][len(prompt):]

    def gen_stream(self, model, engine, messages, stream=True, **kwargs):
        raise NotImplementedError("Sagemaker does not support streaming")
        context = messages[0]['content']
        user_question = messages[-1]['content']
        prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"

        # Construct payload for endpoint
        payload = {
            "inputs": prompt,
            "stream": True,
            "parameters": {
                "do_sample": True,
                "temperature": 0.1,
                "max_new_tokens": 512,
                "repetition_penalty": 1.03,
                "stop": ["</s>", "###"]
            }
        }
        body_bytes = json.dumps(payload).encode('utf-8')

        # Invoke the endpoint
        response = self.runtime.invoke_endpoint_with_response_stream(EndpointName=self.endpoint,
                                                                     ContentType='application/json',
                                                                     Body=body_bytes)
        #result = json.loads(response['Body'].read().decode())
        event_stream = response['Body']
        start_json = b'{'
        for line in LineIterator(event_stream):
            if line != b'' and start_json in line:
                #print(line)
                data = json.loads(line[line.find(start_json):].decode('utf-8'))
                if data['token']['text'] not in ["</s>", "###"]:
                    print(data['token']['text'],end='')
                    yield data['token']['text']
@ -57,7 +57,7 @@ class HTMLParser(BaseParser):
        title_indexes = [i for i, isd_el in enumerate(isd) if isd_el['type'] == 'Title']

        # Creating 'Chunks' - List of lists of strings
        # each list starting with with isd_el['type'] = 'Title' and all the data till the next 'Title'
        # each list starting with isd_el['type'] = 'Title' and all the data till the next 'Title'
        # Each Chunk can be thought of as an individual set of data, which can be sent to the model
        # Where Each Title is grouped together with the data under it
@ -1,5 +1,5 @@
from application.vectorstore.base import BaseVectorStore
from langchain import FAISS
from langchain.vectorstores import FAISS
from application.core.settings import settings

class FaissStore(BaseVectorStore):
@ -21,8 +21,7 @@ except FileExistsError:

def metadata_from_filename(title):
    store = title.split('/')
    store = store[1] + '/' + store[2]
    store = '/'.join(title.split('/')[1:3])
    return {'title': title, 'store': store}
@ -18,7 +18,7 @@ After that, it is time to pick your Instance Image. We recommend using "Linux/Un

As for instance plan, it'll vary depending on your unique demands, but a "1 GB, 1vCPU, 40GB SSD and 2TB transfer" setup should cover most scenarios.

Lastly, Identify your instance by giving it a unique name and then hit "Create instance".
Lastly, identify your instance by giving it a unique name and then hit "Create instance".

PS: Once you create your instance, it'll likely take a few minutes for the setup to be completed.

@ -42,7 +42,7 @@ A terminal window will pop up, and the first step will be to clone the DocsGPT g

#### Download the package information

Once it has finished cloning the repository, it is time to download the package information from all sources. To do so simply enter the following command:
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:

`sudo apt update`

@ -64,7 +64,7 @@ Enter the following command to access the folder in which DocsGPT docker-compose

#### Prepare the environment

Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it.

`nano .env`

@ -95,7 +95,7 @@ You're almost there! Now that all the necessary bits and pieces have been instal

Launching it for the first time will take a few minutes to download all the necessary dependencies and build.

Once this is done you can go ahead and close the terminal window.
Once this is done, you can go ahead and close the terminal window.

#### Enabling ports
@ -1,7 +1,7 @@
## Launching Web App
Note: Make sure you have Docker installed

On Mac OS or Linux just write:
On macOS or Linux, just write:

`./setup.sh`

@ -10,11 +10,11 @@ It will install all the dependencies and give you an option to download the loca
Otherwise, refer to this Guide:

1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI api key](https://platform.openai.com/account/api-keys).
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys).
3. Run `docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.

To stop just run `Ctrl + C`.
To stop, just run `Ctrl + C`.

### Chrome Extension
@ -18,7 +18,7 @@ fetch("http://127.0.0.1:5000/api/answer", {
  .then(console.log.bind(console))
```

In response you will get a json document like this one:
In response, you will get a JSON document like this one:

```json
{
@ -30,7 +30,7 @@ In response you will get a json document like this one:

### /api/docs_check
It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)).
It's a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example:
It's a POST request that sends a JSON in a body with 1 value. Here is a JavaScript fetch example:

```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
@ -45,7 +45,7 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
  .then(console.log.bind(console))
```

In response you will get a json document like this one:
In response, you will get a JSON document like this one:
```json
{
  "status": "exists"
@ -54,17 +54,17 @@ In response you will get a json document like this one:

### /api/combine
Provides json that tells UI which vectors are available and where they are located with a simple get request.
Provides JSON that tells UI which vectors are available and where they are located with a simple get request.

Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.

Example of json in Docshub and local:
Example of JSON in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
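
For reference, here is a minimal fetch sketch for this endpoint in the same style as the other examples on this page; the host and port are assumed to match them, and the returned entries should carry the fields listed above:

```js
// combine (GET http://127.0.0.1:5000/api/combine) — host/port assumed, adjust to your deployment
fetch("http://127.0.0.1:5000/api/combine", {
  method: "GET",
})
  .then((response) => response.json())
  // each entry is expected to expose date, description, docLink, fullName, language, location, model, name, version
  .then(console.log.bind(console))
```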

### /api/upload
Uploads file that needs to be trained, response is json with task id, which can be used to check on tasks progress
Uploads a file that needs to be trained; the response is JSON with a task ID, which can be used to check on the task's progress.
HTML example:

```html
@ -73,7 +73,7 @@ HTML example:
    <input type="text" name="user" value="local" hidden>
    <input type="text" name="name" placeholder="Name:">

    <button type="submit" class="py-2 px-4 text-white bg-blue-500 rounded-md hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500">
    <button type="submit" class="py-2 px-4 text-white bg-purple-30 rounded-md hover:bg-purple-30 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-30">
      Upload
    </button>
</form>
@ -104,7 +104,7 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f

Responses:
There are two types of responses:
1. while task it still running, where "current" will show progress from 0 to 100
1. While task is still running, where "current" will show progress from 0 to 100:
```json
{
  "result": {
@ -114,7 +114,7 @@ There are two types of responses:
}
```

2. When task is completed
2. When task is completed:
```json
{
  "result": {
@ -133,7 +133,7 @@ There are two types of responses:
```

### /api/delete_old
Deletes old vectorstores:
Deletes old Vector stores:
```js
// Task status (GET http://127.0.0.1:5000/api/docs_check)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
@ -13,7 +13,7 @@ chatwoot_token=<from part 2>

5. Start with `flask run` command.

If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation.
If you want the bot to stop responding to questions for a specific user or session, just add a label `human-requested` in your conversation.

### Optional (extra validation)

@ -26,4 +26,4 @@ account_id=(optional) 1
assignee_id=(optional) 1
```

Those are Chatwoot values and will allow you to check if you are responding to the correct widget and answering questions assigned to a specific user.
@ -4,7 +4,7 @@

Go to your project and install a new dependency: `npm install docsgpt`.

### Usage
Go to your project and in the file where you want to use the widget import it:
Go to your project and in the file where you want to use the widget, import it:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
@ -14,12 +14,12 @@ import "docsgpt/dist/style.css";

Then you can use it like this: `<DocsGPTWidget />`

DocsGPTWidget takes 3 props:
- `apiHost` — url of your DocsGPT API.
- `selectDocs` — documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`).
- `apiKey` — usually its empty.
- `apiHost` — URL of your DocsGPT API.
- `selectDocs` — documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
- `apiKey` — usually it's empty.
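
For illustration, here is a minimal sketch of passing all three props; the `apiHost` value and the `MyDocsPage` component name are placeholders, not part of the widget's API:

```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";

// MyDocsPage is a hypothetical wrapper component; only the DocsGPTWidget props matter here
export default function MyDocsPage() {
  return (
    <DocsGPTWidget
      apiHost="http://localhost:7091" // placeholder URL of your DocsGPT API
      selectDocs="default" // or e.g. "local/docs1.zip"
      apiKey="" // usually empty
    />
  );
}
```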

### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install you widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
@ -1,4 +1,4 @@
## To customize a main prompt navigate to `/application/prompt/combine_prompt.txt`
## To customize a main prompt, navigate to `/application/prompt/combine_prompt.txt`

You can try editing it to see how the model responds.
@ -5,18 +5,18 @@ This AI can use any documentation, but first it needs to be prepared for similar

Start by going to `/scripts/` folder.

If you open this file you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
If you open this file, you will see that it uses RST files from the folder to create an `index.faiss` and `index.pkl`.

It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$.
It currently uses OPEN_AI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-$4.

You can usually find documentation on github in `docs/` folder for most open-source projects.
You can usually find documentation on GitHub in the `docs/` folder for most open-source projects.

### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
Name it `inputs/`
Put all your .rst/.md files in there
The search is recursive, so you don't need to flatten them
- Name it `inputs/`
- Put all your .rst/.md files in there
- The search is recursive, so you don't need to flatten them

If there are no .rst/.md files just convert whatever you find to txt and feed it. (don't forget to change the extension in script)
If there are no .rst/.md files, just convert whatever you find to .txt and feed it (don't forget to change the extension in the script).

### 2. Create .env file in `scripts/` folder
And write your OpenAI API key inside
@ -32,7 +32,7 @@ It will tell you how much it will cost

### 5. Run web app
Once you run it, it will use the new context that is relevant to your documentation
Make sure you select default in the dropdown in the UI

## Customization
@ -41,7 +41,7 @@ You can learn more about options while running ingest.py by running:
`python ingest.py --help`
| Options | |
|:--------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:|
| **ingest** | Runs 'ingest' function converting documentation to to Faiss plus Index format |
| **ingest** | Runs 'ingest' function, converting documentation to Faiss plus Index format |
| --dir TEXT | List of paths to directory for index creation. E.g. --dir inputs --dir inputs2 [default: inputs] |
| --file TEXT | File paths to use (Optional; overrides directory) E.g. --files inputs/1.md --files inputs/2.md |
| --recursive / --no-recursive | Whether to recursively search in subdirectories [default: recursive] |
@ -56,4 +56,4 @@ You can learn more about options while running ingest.py by running:
| | |
| **convert** | Creates documentation in .md format from source code |
| --dir TEXT | Path to a directory with source code. E.g. --dir inputs [default: inputs] |
| --formats TEXT | Source code language from which to create documentation. Supports py, js and java. E.g. --formats py [default: py] |
@ -1,4 +1,4 @@
Fortunately there are many providers for LLM's and some of them can even be ran locally
Fortunately, there are many providers for LLMs, and some of them can even be run locally.

There are two models used in the app:
1. Embeddings.
@ -21,12 +21,16 @@ By default, we use OpenAI's models but if you want to change it or even run it l
You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.

Options:
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon)
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)
EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)

If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder: `https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf`.
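
For illustration, a `.env` for the local Llama setup described above might contain the lines below; this is only a sketch built from the option names listed in this guide, and any other variables you already use stay unchanged:

```
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
```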

Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you.

That's it!

### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with important data and don't want anything to leave your premises.

Make sure you set `SELF_HOSTED_MODEL` as true in you `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
@ -1,10 +1,10 @@
If your AI uses external knowledge and is not explicit enough it is ok, because we try to make docsgpt friendly.
If your AI uses external knowledge and is not explicit enough, it is ok, because we try to make DocsGPT friendly.

But if you want to adjust it, here is a simple way.
But if you want to adjust it, here is a simple way:

Got to `application/prompts/chat_combine_prompt.txt`
- Go to `application/prompts/chat_combine_prompt.txt`

And change it to
- And change it to

```
@ -20,7 +20,7 @@
<div class="bg-indigo-500 text-white p-2 rounded-lg mb-2 self-start">
  <p class="text-sm">Hello, ask me anything about this library. Im here to help</p>
</div>
<div class="bg-blue-500 text-white p-2 rounded-lg mb-2 self-end">
<div class="bg-purple-30 text-white p-2 rounded-lg mb-2 self-end">
  <p class="text-sm">How to create API key for Api gateway?</p>
</div>
<div class="bg-indigo-500 text-white p-2 rounded-lg mb-2 self-start">
@ -46,7 +46,7 @@
<div class=" flex mt-4 mb-2">
  <form id="message-form">
    <input id="message-input" class="bg-white p-2 rounded-lg ml-2 w-[26rem]" type="text" placeholder="Type your message here...">
    <button class="bg-blue-500 text-white p-2 rounded-lg ml-2 mr-2 ml-2" type="submit">Send</button>
    <button class="bg-purple-30 text-white p-2 rounded-lg ml-2 mr-2 ml-2" type="submit">Send</button>
  </form>
</div>
@ -3,7 +3,7 @@ document.getElementById("message-form").addEventListener("submit", function(even
  var message = document.getElementById("message-input").value;
  chrome.runtime.sendMessage({msg: "sendMessage", message: message}, function(response) {
    console.log(response.response);
    msg_html = '<div class="bg-blue-500 text-white p-2 rounded-lg mb-2 self-end"><p class="text-sm">'
    msg_html = '<div class="bg-purple-30 text-white p-2 rounded-lg mb-2 self-end"><p class="text-sm">'
    msg_html += message
    msg_html += '</p></div>'
    document.getElementById("messages").innerHTML += msg_html;
2
extensions/react-widget/dist/index.es.js
vendored
@ -366,7 +366,7 @@ function fr() {
}
function ye(e) {
  if (Je(e))
    return g("The provided key is an unsupported type %s. This value must be coerced to a string before before using it here.", qe(e)), ge(e);
    return g("The provided key is an unsupported type %s. This value must be coerced to a string before using it here.", qe(e)), ge(e);
}
var W = O.ReactCurrentOwner, Be = {
  key: !0,
2
extensions/react-widget/dist/index.es.js.map
vendored
2
extensions/react-widget/dist/index.umd.js
vendored
@ -1,3 +1,3 @@
# Please put appropriate value
VITE_API_HOST=http://localhost:7091
VITE_API_HOST=http://0.0.0.0:7091
VITE_API_STREAMING=true
26
frontend/package-lock.json
generated
@ -11,6 +11,7 @@
        "@reduxjs/toolkit": "^1.9.2",
        "@vercel/analytics": "^0.1.10",
        "react": "^18.2.0",
        "react-copy-to-clipboard": "^5.1.0",
        "react-dom": "^18.2.0",
        "react-dropzone": "^14.2.3",
        "react-markdown": "^8.0.7",
@ -2248,6 +2249,14 @@
      "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==",
      "dev": true
    },
    "node_modules/copy-to-clipboard": {
      "version": "3.3.3",
      "resolved": "https://registry.npmjs.org/copy-to-clipboard/-/copy-to-clipboard-3.3.3.tgz",
      "integrity": "sha512-2KV8NhB5JqC3ky0r9PMCAZKbUHSwtEo4CwCs0KXgruG43gX5PMqDEBbVU4OUzw2MuAWUfsuFmWvEKG5QRfSnJA==",
      "dependencies": {
        "toggle-selection": "^1.0.6"
      }
    },
    "node_modules/cosmiconfig": {
      "version": "7.1.0",
      "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-7.1.0.tgz",
@ -6072,6 +6081,18 @@
        "node": ">=0.10.0"
      }
    },
    "node_modules/react-copy-to-clipboard": {
      "version": "5.1.0",
      "resolved": "https://registry.npmjs.org/react-copy-to-clipboard/-/react-copy-to-clipboard-5.1.0.tgz",
      "integrity": "sha512-k61RsNgAayIJNoy9yDsYzDe/yAZAzEbEgcz3DZMhF686LEyukcE1hzurxe85JandPUG+yTfGVFzuEw3xt8WP/A==",
      "dependencies": {
        "copy-to-clipboard": "^3.3.1",
        "prop-types": "^15.8.1"
      },
      "peerDependencies": {
        "react": "^15.3.0 || 16 || 17 || 18"
      }
    },
    "node_modules/react-dom": {
      "version": "18.2.0",
      "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-18.2.0.tgz",
@ -6907,6 +6928,11 @@
        "node": ">=8.0"
      }
    },
    "node_modules/toggle-selection": {
      "version": "1.0.6",
      "resolved": "https://registry.npmjs.org/toggle-selection/-/toggle-selection-1.0.6.tgz",
      "integrity": "sha512-BiZS+C1OS8g/q2RRbJmy59xpyghNBqrr6k5L/uKBGRsTfxmu3ffiRnd8mlGPUVayg8pvfi5urfnu8TU7DVOkLQ=="
    },
    "node_modules/trim-lines": {
      "version": "3.0.1",
      "resolved": "https://registry.npmjs.org/trim-lines/-/trim-lines-3.0.1.tgz",
@ -22,6 +22,7 @@
    "@reduxjs/toolkit": "^1.9.2",
    "@vercel/analytics": "^0.1.10",
    "react": "^18.2.0",
    "react-copy-to-clipboard": "^5.1.0",
    "react-dom": "^18.2.0",
    "react-dropzone": "^14.2.3",
    "react-markdown": "^8.0.7",
@ -4,7 +4,7 @@
export default function About() {
  return (
    <div className="mx-5 grid min-h-screen md:mx-36">
      <article className=" place-items-left mx-auto my-auto flex w-full max-w-6xl flex-col gap-6 rounded-3xl bg-gray-100 p-6 pt-14 pb-14 text-jet lg:p-10 xl:p-16">
      <article className="place-items-left mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col gap-6 rounded-3xl bg-gray-100 p-6 text-jet lg:p-10 xl:p-16">
        <div className="flex items-center">
          <p className="mr-2 text-3xl">About DocsGPT</p>
          <p className="text-[21px]">🦖</p>
@ -51,9 +51,11 @@ export default function About() {
        </div>

        <p>
          Currently It uses DocsGPT documentation, so it will respond to
          information relevant to DocsGPT. If you want to train it on different
          documentation - please follow
          Currently It uses{' '}
          <span className="text-blue-950 font-medium">DocsGPT</span>{' '}
          documentation, so it will respond to information relevant to{' '}
          <span className="text-blue-950 font-medium">DocsGPT</span> . If you
          want to train it on different documentation - please follow
          <a
            className="text-blue-500"
            href="https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation"
@ -4,17 +4,22 @@ import Conversation from './conversation/Conversation';
import About from './About';
import { inject } from '@vercel/analytics';
import { useMediaQuery } from './hooks';
import { useState } from 'react';

inject();

export default function App() {
  const { isMobile } = useMediaQuery();
  const [navOpen, setNavOpen] = useState(!isMobile);

  return (
    <div className="min-h-full min-w-full">
      <Navigation />
      <Navigation navOpen={navOpen} setNavOpen={setNavOpen} />
      <div
        className={`transition-all duration-200 ${
          !isMobile ? 'ml-0 md:ml-72 lg:ml-60' : 'ml-0 md:ml-16'
          !isMobile
            ? `ml-0 ${!navOpen ? '-mt-5 md:mx-auto lg:mx-auto' : 'md:ml-72'}`
            : 'ml-0 md:ml-16'
        }`}
      >
        <Routes>
@ -48,7 +48,7 @@ export default function Hero({ className = '' }: { className?: string }) {
        </div>
      </div>
      <div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/80 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tl-none md:rounded-bl-none">
        <div className="rounded-[45px] bg-white px-6 p-6 lg:rounded-tl-none lg:rounded-bl-none">
        <div className="rounded-[45px] bg-white p-6 px-6 lg:rounded-tl-none lg:rounded-bl-none">
          <img
            src="/message-programming.svg"
            alt="lock"
@ -1,5 +1,5 @@
import { useEffect, useRef, useState } from 'react';
import { NavLink } from 'react-router-dom';
import { NavLink, useNavigate } from 'react-router-dom';
import Arrow1 from './assets/arrow.svg';
import Arrow2 from './assets/dropdown-arrow.svg';
import Exit from './assets/exit.svg';
@ -32,15 +32,20 @@ import { useMediaQuery, useOutsideAlerter } from './hooks';
import Upload from './upload/Upload';
import { Doc, getConversations } from './preferences/preferenceApi';
import SelectDocsModal from './preferences/SelectDocsModal';
import ConversationTile from './conversation/ConversationTile';

export default function Navigation() {
interface NavigationProps {
  navOpen: boolean;
  setNavOpen: React.Dispatch<React.SetStateAction<boolean>>;
}

export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
  const dispatch = useDispatch();
  const docs = useSelector(selectSourceDocs);
  const selectedDocs = useSelector(selectSelectedDocs);
  const conversations = useSelector(selectConversations);
  const conversationId = useSelector(selectConversationId);
  const { isMobile } = useMediaQuery();
  const [navOpen, setNavOpen] = useState(!isMobile);

  const [isDocsListOpen, setIsDocsListOpen] = useState(false);

@ -60,29 +65,30 @@ export default function Navigation() {
  const embeddingsName =
    import.meta.env.VITE_EMBEDDINGS_NAME || 'openai_text-embedding-ada-002';

  const navigate = useNavigate();

  useEffect(() => {
    if (!conversations) {
      getConversations()
        .then((fetchedConversations) => {
          dispatch(setConversations(fetchedConversations));
        })
        .catch((error) => {
          console.error('Failed to fetch conversations: ', error);
        });
      fetchConversations();
    }
  }, [conversations, dispatch]);

  async function fetchConversations() {
    return await getConversations()
      .then((fetchedConversations) => {
        dispatch(setConversations(fetchedConversations));
      })
      .catch((error) => {
        console.error('Failed to fetch conversations: ', error);
      });
  }

  const handleDeleteConversation = (id: string) => {
    fetch(`${apiHost}/api/delete_conversation?id=${id}`, {
      method: 'POST',
    })
      .then(() => {
        // remove the image element from the DOM
        const imageElement = document.querySelector(
          `#img-${id}`,
        ) as HTMLElement;
        const parentElement = imageElement.parentNode as HTMLElement;
        parentElement.parentNode?.removeChild(parentElement);
        fetchConversations();
      })
      .catch((error) => console.error(error));
  };
@ -111,6 +117,7 @@ export default function Navigation() {
    })
      .then((response) => response.json())
      .then((data) => {
        navigate('/');
        dispatch(setConversation(data));
        dispatch(
          updateConversationId({
@ -119,6 +126,29 @@ export default function Navigation() {
        );
      });
  };

  async function updateConversationName(updatedConversation: {
    name: string;
    id: string;
  }) {
    await fetch(`${apiHost}/api/update_conversation_name`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(updatedConversation),
    })
      .then((response) => response.json())
      .then((data) => {
        if (data) {
          navigate('/');
          fetchConversations();
        }
      })
      .catch((err) => {
        console.error(err);
      });
  }
  useOutsideAlerter(
    navRef,
    () => {
@ -144,15 +174,31 @@ export default function Navigation() {

  return (
    <>
      {!navOpen && (
        <button
          className="duration-25 absolute relative top-3 left-3 z-20 hidden transition-all md:block"
          onClick={() => {
            setNavOpen(!navOpen);
          }}
        >
          <img
            src={Arrow1}
            alt="menu toggle"
            className={`${
              !navOpen ? 'rotate-180' : 'rotate-0'
            } m-auto w-3 transition-all duration-200`}
          />
        </button>
      )}
      <div
        ref={navRef}
        className={`${
          !navOpen && '-ml-96 md:-ml-[14rem]'
          !navOpen && '-ml-96 md:-ml-[18rem]'
        } duration-20 fixed z-20 flex h-full w-72 flex-col border-r-2 bg-gray-50 transition-all`}
      >
        <div className={'visible h-16 w-full border-b-2 md:hidden'}>
        <div className={'visible h-16 w-full border-b-2 md:h-12'}>
          <button
            className="float-right mr-5 mt-5 h-5 w-5"
            className="float-right mr-5 mt-5 h-5 w-5 md:mt-3"
            onClick={() => {
              setNavOpen(!navOpen);
            }}
@ -179,7 +225,7 @@ export default function Navigation() {
            className={({ isActive }) =>
              `${
                isActive && conversationId === null ? 'bg-gray-3000' : ''
              } my-auto mx-4 mt-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100`
              } my-auto mx-4 mt-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100`
            }
          >
            <img src={Message} className="ml-4 w-5"></img>
@ -187,39 +233,17 @@ export default function Navigation() {
          </NavLink>
          <div className="conversations-container max-h-[25rem] overflow-y-auto">
            {conversations
              ? conversations.map((conversation) => {
                  return (
                    <div
                      key={conversation.id}
                      onClick={() => {
                        handleConversationClick(conversation.id);
                      }}
                      className={`my-auto mx-4 mt-4 flex h-12 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${
                        conversationId === conversation.id ? 'bg-gray-100' : ''
                      }`}
                    >
                      <div className="flex gap-4">
                        <img src={Message} className="ml-2 w-5"></img>
                        <p className="my-auto text-eerie-black">
                          {conversation.name}
                        </p>
                      </div>

                      {conversationId === conversation.id ? (
                        <img
                          src={Exit}
                          alt="Exit"
                          className="mr-4 h-3 w-3 cursor-pointer hover:opacity-50"
                          id={`img-${conversation.id}`}
                          onClick={(event) => {
                            event.stopPropagation();
                            handleDeleteConversation(conversation.id);
                          }}
                        />
                      ) : null}
                    </div>
                  );
                })
              ? conversations.map((conversation) => (
                  <ConversationTile
                    key={conversation.id}
                    conversation={conversation}
                    selectConversation={(id) => handleConversationClick(id)}
                    onDeleteConversation={(id) => handleDeleteConversation(id)}
                    onSave={(conversation) =>
                      updateConversationName(conversation)
                    }
                  />
                ))
              : null}
            </div>

@ -227,7 +251,7 @@ export default function Navigation() {
        <div className="flex flex-col-reverse border-b-2">
          <div className="relative my-4 flex gap-2 px-2">
            <div
              className="flex h-12 min-w-[85%] cursor-pointer justify-between rounded-3xl rounded-md border-2 bg-white"
              className="flex h-12 w-full cursor-pointer justify-between rounded-3xl border-2 bg-white"
              onClick={() => setIsDocsListOpen(!isDocsListOpen)}
            >
              {selectedDocs && (
@ -293,7 +317,7 @@ export default function Navigation() {
        </div>
        <div className="flex flex-col gap-2 border-b-2 py-2">
          <div
            className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
            className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
            onClick={() => {
              setApiKeyModalState('ACTIVE');
            }}
@ -307,7 +331,7 @@ export default function Navigation() {
          <NavLink
            to="/about"
            className={({ isActive }) =>
              `my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 ${
              `my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 ${
                isActive ? 'bg-gray-3000' : ''
              }`
            }
@ -320,31 +344,30 @@ export default function Navigation() {
            href="https://docs.docsgpt.co.uk/"
            target="_blank"
            rel="noreferrer"
            className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
            className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
          >
            <img src={Link} alt="link" className="ml-2 w-5" />
            <p className="my-auto text-eerie-black">Documentation</p>
          </a>
          <div className="border-t-2">
            <a
              href="https://discord.gg/WHJdfbQDR4"
              target="_blank"
              rel="noreferrer"
              className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
            >
              <img src={Discord} alt="link" className="ml-2 w-5" />
              <p className="my-auto text-eerie-black">Visit our Discord</p>
            </a>
            <a
              href="https://github.com/arc53/DocsGPT"
              target="_blank"
              rel="noreferrer"
              className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
            >
              <img src={Github} alt="link" className="ml-2 w-5" />
              <p className="my-auto text-eerie-black">Visit our GitHub</p>
            </a>
          </div>
          <a
            href="https://discord.gg/WHJdfbQDR4"
            target="_blank"
            rel="noreferrer"
            className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
          >
            <img src={Discord} alt="link" className="ml-2 w-5" />
            <p className="my-auto text-eerie-black">Visit our Discord</p>
          </a>

          <a
            href="https://github.com/arc53/DocsGPT"
            target="_blank"
            rel="noreferrer"
            className="my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
          >
            <img src={Github} alt="link" className="ml-2 w-5" />
            <p className="my-auto text-eerie-black">Visit our Github</p>
          </a>
        </div>
      </div>
      <div className="fixed h-16 w-full border-b-2 bg-gray-50 md:hidden">
3
frontend/src/assets/checkMark.svg
Normal file
@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="11" viewBox="0 0 14 11" fill="none">
<path d="M4.95919 10.1906C4.84318 10.1902 4.72847 10.166 4.62222 10.1194C4.51596 10.0729 4.42041 10.0049 4.34152 9.91985L0.229353 5.54538C0.0756344 5.38157 -0.00671208 5.1634 0.000428491 4.93886C0.00756906 4.71433 0.103612 4.50183 0.267428 4.34812C0.431245 4.1944 0.649417 4.11205 0.873948 4.11919C1.09848 4.12633 1.31098 4.22238 1.4647 4.38619L4.95073 8.10068L12.0666 0.316329C12.1389 0.226405 12.2287 0.152193 12.3306 0.0982513C12.4326 0.0443098 12.5445 0.0117775 12.6594 0.00265255C12.7744 -0.00647237 12.89 0.00800286 12.9992 0.045189C13.1084 0.082375 13.2088 0.141487 13.2943 0.218894C13.3798 0.296301 13.4485 0.390369 13.4964 0.49532C13.5442 0.600272 13.57 0.713891 13.5723 0.829198C13.5746 0.944506 13.5534 1.05907 13.5098 1.16585C13.4662 1.27263 13.4012 1.36937 13.3189 1.45014L5.58533 9.91139C5.50718 9.998 5.41197 10.0675 5.30567 10.1156C5.19938 10.1636 5.0843 10.1892 4.96766 10.1906H4.95919Z" fill="#747474"/>
</svg>
After Width: | Height: | Size: 1.0 KiB |
3
frontend/src/assets/checkmark.svg
Normal file
@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="14" height="11" viewBox="0 0 14 11" fill="none">
<path d="M4.95919 10.1906C4.84318 10.1902 4.72847 10.166 4.62222 10.1194C4.51596 10.0729 4.42041 10.0049 4.34152 9.91985L0.229353 5.54538C0.0756344 5.38157 -0.00671208 5.1634 0.000428491 4.93886C0.00756906 4.71433 0.103612 4.50183 0.267428 4.34812C0.431245 4.1944 0.649417 4.11205 0.873948 4.11919C1.09848 4.12633 1.31098 4.22238 1.4647 4.38619L4.95073 8.10068L12.0666 0.316329C12.1389 0.226405 12.2287 0.152193 12.3306 0.0982513C12.4326 0.0443098 12.5445 0.0117775 12.6594 0.00265255C12.7744 -0.00647237 12.89 0.00800286 12.9992 0.045189C13.1084 0.082375 13.2088 0.141487 13.2943 0.218894C13.3798 0.296301 13.4485 0.390369 13.4964 0.49532C13.5442 0.600272 13.57 0.713891 13.5723 0.829198C13.5746 0.944506 13.5534 1.05907 13.5098 1.16585C13.4662 1.27263 13.4012 1.36937 13.3189 1.45014L5.58533 9.91139C5.50718 9.998 5.41197 10.0675 5.30567 10.1156C5.19938 10.1636 5.0843 10.1892 4.96766 10.1906H4.95919Z" fill="#747474"/>
</svg>
After Width: | Height: | Size: 1.0 KiB |
3
frontend/src/assets/copy.svg
Normal file
@ -0,0 +1,3 @@
<svg width="14" height="17" stroke-width="1.15" viewBox="0 0 14 17" >
<path d="M13.8013 5.01282L8.80645 0.191795C8.67953 0.0691399 8.50734 0.000152609 8.32774 0H6.09677C5.43801 0 4.80623 0.252586 4.34041 0.702193C3.8746 1.1518 3.6129 1.7616 3.6129 2.39744V3.48718H2.48387C1.82511 3.48718 1.19332 3.73977 0.727509 4.18937C0.261693 4.63898 0 5.24878 0 5.88462V14.6026C0 15.2384 0.261693 15.8482 0.727509 16.2978C1.19332 16.7474 1.82511 17 2.48387 17H8.80645C9.46521 17 10.097 16.7474 10.5628 16.2978C11.0286 15.8482 11.2903 15.2384 11.2903 14.6026V13.5128H11.5161C12.1749 13.5128 12.8067 13.2602 13.2725 12.8106C13.7383 12.361 14 11.7512 14 11.1154V5.44872C13.9929 5.28447 13.9219 5.12884 13.8013 5.01282ZM9.03226 2.23179L11.6877 4.79487H9.03226V2.23179ZM9.93548 14.6026C9.93548 14.8916 9.81653 15.1688 9.6048 15.3731C9.39306 15.5775 9.10589 15.6923 8.80645 15.6923H2.48387C2.18443 15.6923 1.89726 15.5775 1.68552 15.3731C1.47379 15.1688 1.35484 14.8916 1.35484 14.6026V5.88462C1.35484 5.5956 1.47379 5.31842 1.68552 5.11405C1.89726 4.90968 2.18443 4.79487 2.48387 4.79487H3.6129V11.1154C3.6129 11.7512 3.8746 12.361 4.34041 12.8106C4.80623 13.2602 5.43801 13.5128 6.09677 13.5128H9.93548V14.6026ZM11.5161 12.2051H6.09677C5.79734 12.2051 5.51016 12.0903 5.29843 11.886C5.08669 11.6816 4.96774 11.4044 4.96774 11.1154V2.39744C4.96774 2.10842 5.08669 1.83124 5.29843 1.62687C5.51016 1.4225 5.79734 1.30769 6.09677 1.30769H7.67742V5.44872C7.67976 5.62143 7.75188 5.78643 7.87842 5.90856C8.00496 6.03069 8.1759 6.10031 8.35484 6.10256H12.6452V11.1154C12.6452 11.4044 12.5262 11.6816 12.3145 11.886C12.1027 12.0903 11.8156 12.2051 11.5161 12.2051Z" fill="#949494"/>
</svg>
After Width: | Height: | Size: 1.6 KiB |
@ -1 +1 @@
<svg width="256px" height="256px" viewBox="0 -28.5 256 256" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid" fill="#000000"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"> <g> <path d="M216.856339,16.5966031 C200.285002,8.84328665 182.566144,3.2084988 164.041564,0 C161.766523,4.11318106 159.108624,9.64549908 157.276099,14.0464379 C137.583995,11.0849896 118.072967,11.0849896 98.7430163,14.0464379 C96.9108417,9.64549908 94.1925838,4.11318106 91.8971895,0 C73.3526068,3.2084988 55.6133949,8.86399117 39.0420583,16.6376612 C5.61752293,67.146514 -3.4433191,116.400813 1.08711069,164.955721 C23.2560196,181.510915 44.7403634,191.567697 65.8621325,198.148576 C71.0772151,190.971126 75.7283628,183.341335 79.7352139,175.300261 C72.104019,172.400575 64.7949724,168.822202 57.8887866,164.667963 C59.7209612,163.310589 61.5131304,161.891452 63.2445898,160.431257 C105.36741,180.133187 151.134928,180.133187 192.754523,160.431257 C194.506336,161.891452 196.298154,163.310589 198.110326,164.667963 C191.183787,168.842556 183.854737,172.420929 176.223542,175.320965 C180.230393,183.341335 184.861538,190.991831 190.096624,198.16893 C211.238746,191.588051 232.743023,181.531619 254.911949,164.955721 C260.227747,108.668201 245.831087,59.8662432 216.856339,16.5966031 Z M85.4738752,135.09489 C72.8290281,135.09489 62.4592217,123.290155 62.4592217,108.914901 C62.4592217,94.5396472 72.607595,82.7145587 85.4738752,82.7145587 C98.3405064,82.7145587 108.709962,94.5189427 108.488529,108.914901 C108.508531,123.290155 98.3405064,135.09489 85.4738752,135.09489 Z M170.525237,135.09489 C157.88039,135.09489 147.510584,123.290155 147.510584,108.914901 C147.510584,94.5396472 157.658606,82.7145587 170.525237,82.7145587 C183.391518,82.7145587 193.761324,94.5189427 193.539891,108.914901 C193.539891,123.290155 183.391518,135.09489 170.525237,135.09489 Z" fill="#61626b" fill-rule="nonzero"> </path> </g> </g></svg>
<svg fill="none" width="20" height="20" viewBox="0 0 22 20" xmlns="http://www.w3.org/2000/svg" xml:space="preserve"><path d="M18.942 5.556a16.299 16.299 0 0 0-4.126-1.297c-.178.321-.385.754-.529 1.097a15.175 15.175 0 0 0-4.573 0 11.583 11.583 0 0 0-.535-1.097 16.274 16.274 0 0 0-4.129 1.3c-2.611 3.946-3.319 7.794-2.965 11.587a16.494 16.494 0 0 0 5.061 2.593 12.65 12.65 0 0 0 1.084-1.785 10.689 10.689 0 0 1-1.707-.831c.143-.106.283-.217.418-.331 3.291 1.539 6.866 1.539 10.118 0 .137.114.277.225.418.331-.541.326-1.114.606-1.71.832a12.52 12.52 0 0 0 1.084 1.785 16.46 16.46 0 0 0 5.064-2.595c.415-4.396-.709-8.209-2.973-11.589zM8.678 14.813c-.988 0-1.798-.922-1.798-2.045s.793-2.047 1.798-2.047 1.815.922 1.798 2.047c.001 1.123-.793 2.045-1.798 2.045zm6.644 0c-.988 0-1.798-.922-1.798-2.045s.793-2.047 1.798-2.047 1.815.922 1.798 2.047c0 1.123-.793 2.045-1.798 2.045z" fill="black" fill-opacity="0.54"/></svg>
Before Width: | Height: | Size: 2.0 KiB After Width: | Height: | Size: 912 B |
3
frontend/src/assets/edit.svg
Normal file
@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="15" viewBox="0 0 16 15" fill="none">
<path d="M10.0588 2.74568L12.5294 5.15732M8.41176 14H15M1.82353 10.7845L1 14L4.29412 13.1961L13.8355 3.88237C14.1443 3.58087 14.3178 3.172 14.3178 2.74568C14.3178 2.31936 14.1443 1.9105 13.8355 1.609L13.6939 1.47073C13.385 1.16932 12.9662 1 12.5294 1C12.0927 1 11.6738 1.16932 11.3649 1.47073L1.82353 10.7845Z" stroke="#747474" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
After Width: | Height: | Size: 500 B |
@ -1 +1,5 @@
<svg xmlns="http://www.w3.org/2000/svg" height="1em" viewBox="0 0 448 512"><!--! Font Awesome Free 6.4.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license (Commercial License) Copyright 2023 Fonticons, Inc. --><style>svg{fill:#4c4d52}</style><path d="M400 32H48C21.5 32 0 53.5 0 80v352c0 26.5 21.5 48 48 48h352c26.5 0 48-21.5 48-48V80c0-26.5-21.5-48-48-48zM277.3 415.7c-8.4 1.5-11.5-3.7-11.5-8 0-5.4.2-33 .2-55.3 0-15.6-5.2-25.5-11.3-30.7 37-4.1 76-9.2 76-73.1 0-18.2-6.5-27.3-17.1-39 1.7-4.3 7.4-22-1.7-45-13.9-4.3-45.7 17.9-45.7 17.9-13.2-3.7-27.5-5.6-41.6-5.6-14.1 0-28.4 1.9-41.6 5.6 0 0-31.8-22.2-45.7-17.9-9.1 22.9-3.5 40.6-1.7 45-10.6 11.7-15.6 20.8-15.6 39 0 63.6 37.3 69 74.3 73.1-4.8 4.3-9.1 11.7-10.6 22.3-9.5 4.3-33.8 11.7-48.3-13.9-9.1-15.8-25.5-17.1-25.5-17.1-16.2-.2-1.1 10.2-1.1 10.2 10.8 5 18.4 24.2 18.4 24.2 9.7 29.7 56.1 19.7 56.1 19.7 0 13.9.2 36.5.2 40.6 0 4.3-3 9.5-11.5 8-66-22.1-112.2-84.9-112.2-158.3 0-91.8 70.2-161.5 162-161.5S388 165.6 388 257.4c.1 73.4-44.7 136.3-110.7 158.3zm-98.1-61.1c-1.9.4-3.7-.4-3.9-1.7-.2-1.5 1.1-2.8 3-3.2 1.9-.2 3.7.6 3.9 1.9.3 1.3-1 2.6-3 3zm-9.5-.9c0 1.3-1.5 2.4-3.5 2.4-2.2.2-3.7-.9-3.7-2.4 0-1.3 1.5-2.4 3.5-2.4 1.9-.2 3.7.9 3.7 2.4zm-13.7-1.1c-.4 1.3-2.4 1.9-4.1 1.3-1.9-.4-3.2-1.9-2.8-3.2.4-1.3 2.4-1.9 4.1-1.5 2 .6 3.3 2.1 2.8 3.4zm-12.3-5.4c-.9 1.1-2.8.9-4.3-.6-1.5-1.3-1.9-3.2-.9-4.1.9-1.1 2.8-.9 4.3.6 1.3 1.3 1.8 3.3.9 4.1zm-9.1-9.1c-.9.6-2.6 0-3.7-1.5s-1.1-3.2 0-3.9c1.1-.9 2.8-.2 3.7 1.3 1.1 1.5 1.1 3.3 0 4.1zm-6.5-9.7c-.9.9-2.4.4-3.5-.6-1.1-1.3-1.3-2.8-.4-3.5.9-.9 2.4-.4 3.5.6 1.1 1.3 1.3 2.8.4 3.5zm-6.7-7.4c-.4.9-1.7 1.1-2.8.4-1.3-.6-1.9-1.7-1.5-2.6.4-.6 1.5-.9 2.8-.4 1.3.7 1.9 1.8 1.5 2.6z"/></svg>
<svg width="800px" height="800px" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
<title>github</title>
<rect width="24" height="24" fill="none"/>
<path d="M12,2A10,10,0,0,0,8.84,21.5c.5.08.66-.23.66-.5V19.31C6.73,19.91,6.14,18,6.14,18A2.69,2.69,0,0,0,5,16.5c-.91-.62.07-.6.07-.6a2.1,2.1,0,0,1,1.53,1,2.15,2.15,0,0,0,2.91.83,2.16,2.16,0,0,1,.63-1.34C8,16.17,5.62,15.31,5.62,11.5a3.87,3.87,0,0,1,1-2.71,3.58,3.58,0,0,1,.1-2.64s.84-.27,2.75,1a9.63,9.63,0,0,1,5,0c1.91-1.29,2.75-1,2.75-1a3.58,3.58,0,0,1,.1,2.64,3.87,3.87,0,0,1,1,2.71c0,3.82-2.34,4.66-4.57,4.91a2.39,2.39,0,0,1,.69,1.85V21c0,.27.16.59.67.5A10,10,0,0,0,12,2Z" fill="black" fill-opacity="0.54"/>
</svg>
Before Width: | Height: | Size: 1.7 KiB After Width: | Height: | Size: 679 B |
3
frontend/src/assets/trash.svg
Normal file
@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="12" height="15" viewBox="0 0 12 15" fill="none">
<path d="M0.857143 13.3333C0.857143 13.7754 1.03775 14.1993 1.35925 14.5118C1.68074 14.8244 2.11677 15 2.57143 15H9.42857C9.88323 15 10.3193 14.8244 10.6408 14.5118C10.9622 14.1993 11.1429 13.7754 11.1429 13.3333V3.33333H0.857143V13.3333ZM2.57143 5H9.42857V13.3333H2.57143V5ZM9 0.833333L8.14286 0H3.85714L3 0.833333H0V2.5H12V0.833333H9Z" fill="#747474"/>
</svg>
After Width: | Height: | Size: 459 B |
@ -29,6 +29,13 @@ export default function Conversation() {
scrollIntoView();
}, [queries.length, queries[queries.length - 1]]);

useEffect(() => {
const element = document.getElementById('inputbox') as HTMLInputElement;
if (element) {
element.focus();
}
}, []);

useEffect(() => {
const observerCallback: IntersectionObserverCallback = (entries) => {
entries.forEach((entry) => {
@ -81,7 +88,7 @@ export default function Conversation() {
responseView = (
<ConversationBubble
ref={endMessageRef}
className={`${index === queries.length - 1 ? 'mb-24' : 'mb-7'}`}
className={`${index === queries.length - 1 ? 'mb-32' : 'mb-7'}`}
key={`${index}ERROR`}
message={query.error}
type="ERROR"
@ -91,7 +98,7 @@ export default function Conversation() {
responseView = (
<ConversationBubble
ref={endMessageRef}
className={`${index === queries.length - 1 ? 'mb-24' : 'mb-7'}`}
className={`${index === queries.length - 1 ? 'mb-32' : 'mb-7'}`}
key={`${index}ANSWER`}
message={query.response}
type={'ANSWER'}
@ -134,7 +141,7 @@ export default function Conversation() {
return (
<Fragment key={index}>
<ConversationBubble
className={'mb-7'}
className={'last:mb-27 mb-7'}
key={`${index}QUESTION`}
message={query.prompt}
type="QUESTION"
@ -149,14 +156,16 @@ export default function Conversation() {
{queries.length === 0 && (
<Hero className="mt-24 h-[100vh] md:mt-52"></Hero>
)}
<div className="relative bottom-0 flex w-10/12 flex-col items-end self-center md:fixed md:w-[50%]">
<div className="relative bottom-0 flex w-10/12 flex-col items-end self-center bg-white pt-3 md:fixed md:w-[65%]">
<div className="flex h-full w-full">
<div
id="inputbox"
ref={inputRef}
tabIndex={1}
placeholder="Type your message here..."
contentEditable
onPaste={handlePaste}
className={`border-000000 overflow-x-hidden; max-h-24 min-h-[2.6rem] w-full overflow-y-auto whitespace-pre-wrap rounded-xl border bg-white py-2 pl-4 pr-9 leading-7 opacity-100 focus:outline-none`}
className={`border-000000 overflow-x-hidden; max-h-24 min-h-[2.6rem] w-full overflow-y-auto whitespace-pre-wrap rounded-3xl border bg-white py-2 pl-4 pr-9 text-base leading-7 opacity-100 focus:outline-none`}
onKeyDown={(e) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
@ -4,7 +4,10 @@ import { FEEDBACK, MESSAGE_TYPE } from './conversationModels';
import Alert from './../assets/alert.svg';
import { ReactComponent as Like } from './../assets/like.svg';
import { ReactComponent as Dislike } from './../assets/dislike.svg';
import { ReactComponent as Copy } from './../assets/copy.svg';
import { ReactComponent as Checkmark } from './../assets/checkmark.svg';
import ReactMarkdown from 'react-markdown';
import copy from 'copy-to-clipboard';
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
import { vscDarkPlus } from 'react-syntax-highlighter/dist/cjs/styles/prism';

@ -26,6 +29,17 @@ const ConversationBubble = forwardRef<
) {
const [showFeedback, setShowFeedback] = useState(false);
const [openSource, setOpenSource] = useState<number | null>(null);
const [copied, setCopied] = useState(false);

const handleCopyClick = (text: string) => {
copy(text);
setCopied(true);
// Reset copied to false after a few seconds
setTimeout(() => {
setCopied(false);
}, 2000);
};

const List = ({
ordered,
children,
@ -42,7 +56,7 @@ const ConversationBubble = forwardRef<
bubble = (
<div ref={ref} className={`flex flex-row-reverse self-end ${className}`}>
<Avatar className="mt-2 text-2xl" avatar="🧑💻"></Avatar>
<div className="mr-2 ml-10 flex items-center rounded-3xl bg-blue-1000 p-3.5 text-white">
<div className="mr-2 ml-10 flex items-center rounded-3xl bg-purple-30 p-3.5 text-white">
<ReactMarkdown className="whitespace-pre-wrap break-all">
{message}
</ReactMarkdown>
@ -62,15 +76,15 @@ const ConversationBubble = forwardRef<
<div
className={`ml-2 mr-5 flex flex-col items-center rounded-3xl bg-gray-1000 p-3.5 ${
type === 'ERROR'
? ' rounded-lg border border-red-2000 bg-red-1000 p-2 text-red-3000'
: ''
? 'flex-row rounded-full border border-transparent bg-[#FFE7E7] p-2 py-5 text-sm font-normal text-red-3000 dark:border-red-2000 dark:text-white'
: 'flex-col rounded-3xl'
}`}
>
{type === 'ERROR' && (
<img src={Alert} alt="alert" className="mr-2 inline" />
)}
<ReactMarkdown
className="whitespace-pre-wrap break-words"
className="max-w-screen-md whitespace-pre-wrap break-words"
components={{
code({ node, inline, className, children, ...props }) {
const match = /language-(\w+)/.exec(className || '');
@ -101,18 +115,14 @@ const ConversationBubble = forwardRef<
{message}
</ReactMarkdown>
{DisableSourceFE || type === 'ERROR' ? null : (
<span className="mt-3 h-px w-full bg-[#DEDEDE]"></span>
)}
<div className="mt-3 flex w-full flex-row flex-wrap items-center justify-start gap-2">
{DisableSourceFE || type === 'ERROR' ? null : (
<div className="py-1 px-2 text-base font-semibold">
Sources:
</div>
)}
<div className="flex flex-row flex-wrap items-center justify-start gap-2">
{DisableSourceFE
? null
: sources?.map((source, index) => (
<>
<span className="mt-3 h-px w-full bg-[#DEDEDE]"></span>
<div className="mt-3 flex w-full flex-row flex-wrap items-center justify-start gap-2">
<div className="py-1 px-2 text-base font-semibold">
Sources:
</div>
<div className="flex flex-row flex-wrap items-center justify-start gap-2">
{sources?.map((source, index) => (
<div
key={index}
className={`max-w-fit cursor-pointer rounded-[28px] py-1 px-4 ${
@ -131,38 +141,56 @@ const ConversationBubble = forwardRef<
: 'text-[#007DFF]'
}`}
>
{index + 1}. {source.title}
{index + 1}. {source.title.substring(0, 45)}
</p>
</div>
))}
</div>
</div>
</div>
</div>
</>
)}
</div>
<div
className={`mr-2 flex items-center justify-center ${
className={`relative mr-2 flex items-center justify-center ${
type !== 'ERROR' && showFeedback ? '' : 'md:invisible'
}`}
>
{copied ? (
<Checkmark className="absolute left-2 top-4" />
) : (
<Copy
className={`absolute left-2 top-4 cursor-pointer fill-gray-4000 hover:stroke-gray-4000`}
onClick={() => {
handleCopyClick(message);
}}
></Copy>
)}
</div>
<div
className={`relative mr-2 flex items-center justify-center ${
feedback === 'LIKE' || (type !== 'ERROR' && showFeedback)
? ''
: 'md:invisible'
}`}
>
<Like
className={`cursor-pointer ${
className={`absolute left-6 top-4 cursor-pointer ${
feedback === 'LIKE'
? 'fill-blue-1000 stroke-blue-1000'
? 'fill-purple-30 stroke-purple-30'
: 'fill-none stroke-gray-4000 hover:fill-gray-4000'
}`}
onClick={() => handleFeedback?.('LIKE')}
></Like>
</div>
<div
className={`mr-10 flex items-center justify-center ${
className={`relative mr-10 flex items-center justify-center ${
feedback === 'DISLIKE' || (type !== 'ERROR' && showFeedback)
? ''
: 'md:invisible'
}`}
>
<Dislike
className={`cursor-pointer ${
className={`absolute left-10 top-4 cursor-pointer ${
feedback === 'DISLIKE'
? 'fill-red-2000 stroke-red-2000'
: 'fill-none stroke-gray-4000 hover:fill-gray-4000'
@ -173,15 +201,13 @@ const ConversationBubble = forwardRef<
</div>

{sources && openSource !== null && sources[openSource] && (
<div className="ml-8 mt-2 w-3/4 rounded-xl bg-blue-200 p-2">
<p className="w-3/4 truncate text-xs text-gray-500">
<div className="ml-10 mt-2 max-w-[800px] rounded-xl bg-blue-200 p-2">
<p className="m-1 w-3/4 truncate text-xs text-gray-500">
Source: {sources[openSource].title}
</p>

<div className="rounded-xl border-2 border-gray-200 bg-white p-2">
<p className="text-xs text-gray-500 ">
{sources[openSource].text}
</p>
<div className="m-2 rounded-xl border-2 border-gray-200 bg-white p-2">
<p className="text-black">{sources[openSource].text}</p>
</div>
</div>
)}
128
frontend/src/conversation/ConversationTile.tsx
Normal file
@ -0,0 +1,128 @@
import { useEffect, useRef, useState } from 'react';
import { useSelector } from 'react-redux';
import Edit from '../assets/edit.svg';
import Exit from '../assets/exit.svg';
import Message from '../assets/message.svg';
import CheckMark from '../assets/checkmark.svg';
import Trash from '../assets/trash.svg';

import { selectConversationId } from '../preferences/preferenceSlice';
import { useOutsideAlerter } from '../hooks';

interface ConversationProps {
  name: string;
  id: string;
}
interface ConversationTileProps {
  conversation: ConversationProps;
  selectConversation: (arg1: string) => void;
  onDeleteConversation: (arg1: string) => void;
  onSave: ({ name, id }: ConversationProps) => void;
}

export default function ConversationTile({
  conversation,
  selectConversation,
  onDeleteConversation,
  onSave,
}: ConversationTileProps) {
  const conversationId = useSelector(selectConversationId);
  const tileRef = useRef<HTMLInputElement>(null);

  const [isEdit, setIsEdit] = useState(false);
  const [conversationName, setConversationsName] = useState('');
  useOutsideAlerter(
    tileRef,
    () =>
      handleSaveConversation({
        id: conversationId || conversation.id,
        name: conversationName,
      }),
    [conversationName],
  );

  useEffect(() => {
    setConversationsName(conversation.name);
  }, [conversation.name]);

  function handleEditConversation() {
    setIsEdit(true);
  }

  function handleSaveConversation(changedConversation: ConversationProps) {
    if (changedConversation.name.trim().length) {
      onSave(changedConversation);
      setIsEdit(false);
    } else {
      onClear();
    }
  }

  function onClear() {
    setConversationsName(conversation.name);
    setIsEdit(false);
  }
  return (
    <div
      ref={tileRef}
      onClick={() => {
        selectConversation(conversation.id);
      }}
      className={`my-auto mx-4 mt-4 flex h-12 cursor-pointer items-center justify-between gap-4 rounded-3xl hover:bg-gray-100 ${
        conversationId === conversation.id ? 'bg-gray-100' : ''
      }`}
    >
      <div
        className={`flex ${
          conversationId === conversation.id ? 'w-[75%]' : 'w-[95%]'
        } gap-4`}
      >
        <img src={Message} className="ml-2 w-5"></img>
        {isEdit ? (
          <input
            autoFocus
            type="text"
            className="h-6 w-full px-1 text-sm font-normal leading-6 outline-[#0075FF] focus:outline-1"
            value={conversationName}
            onChange={(e) => setConversationsName(e.target.value)}
          />
        ) : (
          <p className="my-auto overflow-hidden overflow-ellipsis whitespace-nowrap text-sm font-normal leading-6 text-eerie-black">
            {conversationName}
          </p>
        )}
      </div>
      {conversationId === conversation.id ? (
        <div className="flex">
          <img
            src={isEdit ? CheckMark : Edit}
            alt="Edit"
            className="mr-2 h-4 w-4 cursor-pointer hover:opacity-50"
            id={`img-${conversation.id}`}
            onClick={(event) => {
              event.stopPropagation();
              isEdit
                ? handleSaveConversation({
                    id: conversationId,
                    name: conversationName,
                  })
                : handleEditConversation();
            }}
          />
          <img
            src={isEdit ? Exit : Trash}
            alt="Exit"
            className={`mr-4 ${
              isEdit ? 'h-3 w-3' : 'h-4 w-4'
            }mt-px cursor-pointer hover:opacity-50`}
            id={`img-${conversation.id}`}
            onClick={(event) => {
              event.stopPropagation();
              isEdit ? onClear() : onDeleteConversation(conversation.id);
            }}
          />
        </div>
      ) : null}
    </div>
  );
}
@ -142,14 +142,14 @@ export const conversationSlice = createSlice({
state,
action: PayloadAction<{ index: number; query: Partial<Query> }>,
) {
const index = action.payload.index;
if (action.payload.query.response) {
const { index, query } = action.payload;
if (query.response) {
state.queries[index].response =
(state.queries[index].response || '') + action.payload.query.response;
(state.queries[index].response || '') + query.response;
} else {
state.queries[index] = {
...state.queries[index],
...action.payload.query,
...query,
};
}
},
@ -163,21 +163,21 @@ export const conversationSlice = createSlice({
state,
action: PayloadAction<{ index: number; query: Partial<Query> }>,
) {
const index = action.payload.index;
const { index, query } = action.payload;
if (!state.queries[index].sources) {
state.queries[index].sources = [action.payload.query.sources![0]];
state.queries[index].sources = [query.sources![0]];
} else {
state.queries[index].sources!.push(action.payload.query.sources![0]);
state.queries[index].sources!.push(query.sources![0]);
}
},
updateQuery(
state,
action: PayloadAction<{ index: number; query: Partial<Query> }>,
) {
const index = action.payload.index;
const { index, query } = action.payload;
state.queries[index] = {
...state.queries[index],
...action.payload.query,
...query,
};
},
setStatus(state, action: PayloadAction<Status>) {
@ -41,9 +41,9 @@ export default function Upload({
<p className="mt-10 text-2xl">{progress?.percentage || 0}%</p>

<div className="mb-10 w-[50%]">
<div className="h-1 w-[100%] bg-blue-4000"></div>
<div className="h-1 w-[100%] bg-purple-30"></div>
<div
className={`relative bottom-1 h-1 bg-blue-5000 transition-all`}
className={`relative bottom-1 h-1 bg-purple-30 transition-all`}
style={{ width: `${progress?.percentage || 0}%` }}
></div>
</div>
@ -55,7 +55,7 @@ export default function Upload({
setProgress(undefined);
setModalState('INACTIVE');
}}
className={`rounded-3xl bg-blue-3000 px-4 py-2 text-sm font-medium text-white ${
className={`rounded-3xl bg-purple-30 px-4 py-2 text-sm font-medium text-white ${
isCancellable ? '' : 'hidden'
}`}
>
@ -189,7 +189,7 @@ export default function Upload({
<span className="bg-white px-2 text-xs text-gray-4000">Name</span>
</div>
<div {...getRootProps()}>
<span className="rounded-3xl border border-blue-2000 px-4 py-2 font-medium text-blue-2000 hover:cursor-pointer">
<span className="rounded-3xl border border-purple-30 px-4 py-2 font-medium text-purple-30 hover:cursor-pointer">
<input type="button" {...getInputProps()} />
Choose Files
</span>
@ -206,7 +206,7 @@ export default function Upload({
<div className="flex flex-row-reverse">
<button
onClick={uploadFile}
className="ml-6 rounded-3xl bg-blue-3000 py-2 px-6 text-white"
className="ml-6 rounded-3xl bg-purple-30 py-2 px-6 text-white"
>
Train
</button>
@ -1,6 +1,7 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'],
darkMode: 'class',
theme: {
extend: {
spacing: {
@ -24,6 +25,7 @@ module.exports = {
'blue-1000': '#7D54D1',
'blue-2000': '#002B49',
'blue-3000': '#4B02E2',
'purple-30': '#7D54D1',
'blue-4000': 'rgba(0, 125, 255, 0.36)',
'blue-5000': 'rgba(0, 125, 255)',
},
@ -78,14 +78,12 @@ def ingest(yes: bool = typer.Option(False, "-y", "--yes", prompt=False,
# Here we check for command line arguments for bot calls.
# If no argument exists or the yes is not True, then the
# user permission is requested to call the API.
if len(sys.argv) > 1:
    if yes:
        call_openai_api(docs, folder_name)
    else:
        get_user_permission(docs, folder_name)
if len(sys.argv) > 1 and yes:
    call_openai_api(docs, folder_name)
else:
    get_user_permission(docs, folder_name)


folder_counts = defaultdict(int)
folder_names = []
for dir_path in dir:
@ -110,14 +108,19 @@ def convert(dir: Optional[str] = typer.Option("inputs",
Creates documentation linked to original functions from specified location.
By default /inputs folder is used, .py is parsed.
"""
if formats == 'py':
    functions_dict, classes_dict = extract_py(dir)
elif formats == 'js':
    functions_dict, classes_dict = extract_js(dir)
elif formats == 'java':
    functions_dict, classes_dict = extract_java(dir)
# Using a dictionary to map between the formats and their respective extraction functions
# makes the code more scalable. When adding more formats in the future,
# you only need to update the extraction_functions dictionary.
extraction_functions = {
    'py': extract_py,
    'js': extract_js,
    'java': extract_java
}

if formats in extraction_functions:
    functions_dict, classes_dict = extraction_functions[formats](dir)
else:
    raise Exception("Sorry, language not supported yet")
    raise Exception("Sorry, language not supported yet")
transform_to_docs(functions_dict, classes_dict, formats, dir)
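
The comments in the hunk above explain why the chain of if/elif branches was replaced with a dictionary that maps each format to its extraction function. Below is a minimal, self-contained sketch of that dispatch pattern, not the project's actual code: the extractor functions are hypothetical stand-ins used only to show that supporting a new format becomes a one-line addition to the table.

# Minimal sketch of the format-to-extractor dispatch described above.
# The extractors here are made-up stand-ins, not the real extract_py/extract_js helpers.
def extract_py(path):
    return {"parse": "def parse(): ..."}, {}

def extract_js(path):
    return {"render": "function render() {}"}, {}

extraction_functions = {
    'py': extract_py,
    'js': extract_js,
}

def convert(dir, formats):
    # Supporting another format only requires registering it in the dictionary above.
    if formats not in extraction_functions:
        raise Exception("Sorry, language not supported yet")
    return extraction_functions[formats](dir)

if __name__ == '__main__':
    functions_dict, classes_dict = convert('inputs', 'py')
    print(functions_dict, classes_dict)

Adding another language then means one more entry, for example 'java': extract_java, without touching the branching logic.
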
@ -57,7 +57,7 @@ class HTMLParser(BaseParser):
title_indexes = [i for i, isd_el in enumerate(isd) if isd_el['type'] == 'Title']

# Creating 'Chunks' - List of lists of strings
# each list starting with with isd_el['type'] = 'Title' and all the data till the next 'Title'
# each list starting with isd_el['type'] = 'Title' and all the data till the next 'Title'
# Each Chunk can be thought of as an individual set of data, which can be sent to the model
# Where Each Title is grouped together with the data under it
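
The comment block above describes how the HTML parser groups parsed elements into 'Chunks': each chunk starts at a 'Title' element and carries everything up to the next 'Title'. The sketch below only illustrates that grouping idea on made-up data; it is not the parser's actual implementation.

# Illustration only: group a flat list of parsed elements into Title-led chunks.
# The sample isd list is invented for this sketch.
isd = [
    {'type': 'Title', 'text': 'Installation'},
    {'type': 'NarrativeText', 'text': 'Run the Docker setup described in the README.'},
    {'type': 'Title', 'text': 'Usage'},
    {'type': 'NarrativeText', 'text': 'Upload documents and start asking questions.'},
]

title_indexes = [i for i, isd_el in enumerate(isd) if isd_el['type'] == 'Title']

chunks = []
for n, start in enumerate(title_indexes):
    end = title_indexes[n + 1] if n + 1 < len(title_indexes) else len(isd)
    # Each chunk keeps the title together with the data under it.
    chunks.append([isd_el['text'] for isd_el in isd[start:end]])

print(chunks)
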
96
tests/llm/test_sagemaker.py
Normal file
@ -0,0 +1,96 @@
# FILEPATH: /path/to/test_sagemaker.py

import json
import unittest
from unittest.mock import MagicMock, patch
from application.llm.sagemaker import SagemakerAPILLM, LineIterator

class TestSagemakerAPILLM(unittest.TestCase):

    def setUp(self):
        self.sagemaker = SagemakerAPILLM()
        self.context = "This is the context"
        self.user_question = "What is the answer?"
        self.messages = [
            {"content": self.context},
            {"content": "Some other message"},
            {"content": self.user_question}
        ]
        self.prompt = f"### Instruction \n {self.user_question} \n ### Context \n {self.context} \n ### Answer \n"
        self.payload = {
            "inputs": self.prompt,
            "stream": False,
            "parameters": {
                "do_sample": True,
                "temperature": 0.1,
                "max_new_tokens": 30,
                "repetition_penalty": 1.03,
                "stop": ["</s>", "###"]
            }
        }
        self.payload_stream = {
            "inputs": self.prompt,
            "stream": True,
            "parameters": {
                "do_sample": True,
                "temperature": 0.1,
                "max_new_tokens": 512,
                "repetition_penalty": 1.03,
                "stop": ["</s>", "###"]
            }
        }
        self.body_bytes = json.dumps(self.payload).encode('utf-8')
        self.body_bytes_stream = json.dumps(self.payload_stream).encode('utf-8')
        self.response = {
            "Body": MagicMock()
        }
        self.result = [
            {
                "generated_text": "This is the generated text"
            }
        ]
        self.response['Body'].read.return_value.decode.return_value = json.dumps(self.result)

    def test_gen(self):
        with patch.object(self.sagemaker.runtime, 'invoke_endpoint',
                          return_value=self.response) as mock_invoke_endpoint:
            output = self.sagemaker.gen(None, None, self.messages)
            mock_invoke_endpoint.assert_called_once_with(
                EndpointName=self.sagemaker.endpoint,
                ContentType='application/json',
                Body=self.body_bytes
            )
            self.assertEqual(output,
                             self.result[0]['generated_text'][len(self.prompt):])

    def test_gen_stream(self):
        with patch.object(self.sagemaker.runtime, 'invoke_endpoint_with_response_stream',
                          return_value=self.response) as mock_invoke_endpoint:
            output = list(self.sagemaker.gen_stream(None, None, self.messages))
            mock_invoke_endpoint.assert_called_once_with(
                EndpointName=self.sagemaker.endpoint,
                ContentType='application/json',
                Body=self.body_bytes_stream
            )
            self.assertEqual(output, [])

class TestLineIterator(unittest.TestCase):

    def setUp(self):
        self.stream = [
            {'PayloadPart': {'Bytes': b'{"outputs": [" a"]}\n'}},
            {'PayloadPart': {'Bytes': b'{"outputs": [" challenging"]}\n'}},
            {'PayloadPart': {'Bytes': b'{"outputs": [" problem"]}\n'}}
        ]
        self.line_iterator = LineIterator(self.stream)

    def test_iter(self):
        self.assertEqual(iter(self.line_iterator), self.line_iterator)

    def test_next(self):
        self.assertEqual(next(self.line_iterator), b'{"outputs": [" a"]}')
        self.assertEqual(next(self.line_iterator), b'{"outputs": [" challenging"]}')
        self.assertEqual(next(self.line_iterator), b'{"outputs": [" problem"]}')

if __name__ == '__main__':
    unittest.main()