Merge branch 'main' into main

Digvijay Shelar 2023-10-06 22:41:28 +05:30 committed by GitHub
commit 168f4c0056
35 changed files with 564 additions and 245 deletions

.github/labeler.yml vendored Normal file

@ -0,0 +1,23 @@
repo:
- '*'
github:
- .github/**/*
application:
- application/**/*
docs:
- docs/**/*
extensions:
- extensions/**/*
frontend:
- frontend/**/*
scripts:
- scripts/**/*
tests:
- tests/**/*

.github/workflows/labeler.yml vendored Normal file

@ -0,0 +1,15 @@
# https://github.com/actions/labeler
name: Pull Request Labeler
on:
- pull_request_target
jobs:
triage:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v4
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
sync-labels: true


@ -2,7 +2,7 @@
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
We as members, contributors, and leaders, pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
@ -10,20 +10,20 @@ nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
diverse, inclusive, and a healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
Examples of behavior that contribute to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Demonstrating empathy and kindness towards other people
* Being respectful and open to differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Taking accountability and offering apologies to those who have been impacted by our errors,
while also gaining insights from the situation
* Focusing on what is best not just for us as individuals, but for the
overall community
community as a whole
Examples of unacceptable behavior include:
@ -31,7 +31,7 @@ Examples of unacceptable behavior include:
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
* Publishing other's private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
@ -74,7 +74,7 @@ the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
unprofessional or unwelcome in the community space.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
@ -107,7 +107,7 @@ Violating these terms may lead to a permanent ban.
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
individual, or aggression towards or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.


@ -1,6 +1,6 @@
# Welcome to DocsGPT Contributing guideline
# Welcome to DocsGPT Contributing Guidelines
Thank you for choosing this project to contribute to, we are all very grateful!
Thank you for choosing this project to contribute to. We are all very grateful!
### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
@ -17,30 +17,36 @@ Thank you for choosing this project to contribute to, we are all very grateful!
## 🐞 Issues and Pull requests
We value contributions to our issues in the form of discussion or suggestion, we recommend that you check out existing issues and our [Roadmap](https://github.com/orgs/arc53/projects/2)
We value contributions to our issues in the form of discussion or suggestions. We recommend that you check out existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
If you want to contribute by writing code there are a few things that you should know before doing it:
We have frontend (React, Vite) and Backend (python)
If you want to contribute by writing code, there are a few things that you should know before doing it:
We have a frontend in React (Vite) and backend in Python.
### If you are looking to contribute to frontend (⚛React, Vite):
- The current frontend is being migrated from `/application` to `/frontend` with a new design, so please contribute to the new one.
- Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
- The Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
### If you are looking to contribute to Frontend (⚛React, Vite):
The current frontend is being migrated from /application to /frontend with a new design, so please contribute to the new one. Check out this [Milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues also [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1)
Please try to follow the guidelines.
### If you are looking to contribute to Backend (🐍Python):
* Check out our issues, and contribute to /application or /scripts (ignore old ingest_rst.py ingest_rst_sphinx.py files, they will be deprecated soon)
* All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [/tests](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
* Before submitting your PR make sure that after you ingested some test data it is queryable.
### If you are looking to contribute to Backend (🐍 Python):
- Check out our issues and contribute to `/application` or `/scripts` (ignore old `ingest_rst.py` `ingest_rst_sphinx.py` files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your PR, ensure it is queryable after ingesting some test data.
### Testing
To run unit tests, from the root of the repository execute:
To run unit tests from the root of the repository, execute:
```
python -m pytest
```
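For reference, a minimal sketch of what a pytest-style test can look like (the file name and functions below are hypothetical, purely to illustrate the convention; real tests live under `/tests`):
```python
# hypothetical tests/test_example.py
def slugify(title: str) -> str:
    # stand-in for application code under test
    return title.strip().lower().replace(" ", "-")


def test_slugify():
    # pytest collects functions prefixed with `test_` and checks plain assertions
    assert slugify("Hello World") == "hello-world"
```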
### Workflow:
Create a fork, make changes on your forked repository, and submit changes in the form of a pull request.
Create a fork, make changes on your forked repository, and submit changes as a pull request.
## Questions/collaboration
Please join our [Discord](https://discord.gg/n5BX8dh8rU) don't hesitate, we are very friendly and welcoming to new contributors.
Please join our [Discord](https://discord.gg/n5BX8dh8rU). Don't hesitate; we are very friendly and welcoming to new contributors.
# Thank you so much for considering contributing to DocsGPT!🙏


@ -2,13 +2,13 @@
Welcome, contributors! We're excited to announce that DocsGPT is participating in Hacktoberfest. Get involved by submitting a **meaningful** pull request, and earn a free shirt in return!
All contributors with accepted PR's will receive a cool holopin! 🤩 (Watchout for a reply in your PR to collect it)
All contributors with accepted PRs will receive a cool Holopin! 🤩 (Watch out for a reply in your PR to collect it).
📜 Here's How to Contribute:
🛠️ Code: This is the golden ticket! Make meaningful contributions through PRs.
📚 Wiki: Improve our documentation, Create a guide or change existing documentation.
🖥️ Design: Improve the UI/UX, or design a new feature.
🖥️ Design: Improve the UI/UX or design a new feature.
📝 Guidelines for Pull Requests:
@ -16,20 +16,20 @@ Familiarize yourself with the current contributions and our [Roadmap](https://gi
Deciding to contribute with code? Here are some insights based on the area of your interest:
Frontend (⚛React, Vite):
Most of the code is located in /frontend folder. You can also check out our React extension in /extensions/react-widget.
For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
Ensure you adhere to the established guidelines.
- Frontend (⚛React, Vite):
- Most of the code is located in `/frontend` folder. You can also check out our React extension in /extensions/react-widget.
- For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
- Ensure you adhere to the established guidelines.
Backend (🐍Python):
Focus on /application or /scripts. However, avoid the files ingest_rst.py and ingest_rst_sphinx.py as they are soon to be deprecated.
Newly added code should come with relevant unit tests (pytest).
Refer to the /tests folder for test suites.
- Backend (🐍Python):
- Focus on `/application` or `/scripts`. However, avoid the files ingest_rst.py and ingest_rst_sphinx.py, as they will soon be deprecated.
- Newly added code should come with relevant unit tests (pytest).
- Refer to the `/tests` folder for test suites.
Check out [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
Check out our [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
Once you have Created your PR and it was merged, please fill in this [form](https://airtable.com/appfkqFVjB0RpYCJh/shrXXM98xgRsbjO7s)
Once you have created your PR and our maintainers have merged it, please fill in this [form](https://airtable.com/appfkqFVjB0RpYCJh/shrXXM98xgRsbjO7s).
Don't be shy! Hop into our [Discord](https://discord.gg/n5BX8dh8rU) Server. We're a friendly bunch and eager to assist newcomers.
Feel free to join our Discord server. We're here to help newcomers, so don't hesitate to jump in! [Join us here](https://discord.gg/n5BX8dh8rU).
Big thanks for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your effort can earn you a swanky new t-shirt. 🎁 Let's code together! 🚀
Thank you very much for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your contributions could earn you a stylish new t-shirt as a token of our appreciation. 🎁 Join us, and let's code together! 🚀


@ -18,14 +18,12 @@ Say goodbye to time-consuming manual searches, and let <strong>DocsGPT</strong>
<a href="https://github.com/arc53/DocsGPT">![example2](https://img.shields.io/github/forks/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT/blob/main/LICENSE">![example3](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://discord.gg/n5BX8dh8rU">![example3](https://img.shields.io/discord/1070046503302877216)</a>
</div>
### Production Support/ Help for companies:
### Production Support / Help for companies:
When deploying your DocsGPT to a live environment, we're eager to provide personalized assistance.
We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.
- [Schedule Demo 👋](https://cal.com/arc53/docsgpt-demo-b2b?date=2023-10-04&month=2023-10)
- [Send Email ✉️](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)
@ -36,9 +34,9 @@ When deploying your DocsGPT to a live environment, we're eager to provide person
## Roadmap
You can find our [Roadmap](https://github.com/orgs/arc53/projects/2) here. Please don't hesitate to contribute or create issues, it helps us make DocsGPT better!
You can find our roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
## Our Open-Source models optimised for DocsGPT:
## Our Open-Source models optimized for DocsGPT:
| Name | Base Model | Requirements (or similar) |
|-------------------|------------|----------------------------------------------------------|
@ -47,7 +45,7 @@ You can find our [Roadmap](https://github.com/orgs/arc53/projects/2) here. Pleas
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon) | falcon-40b | 8xA10G gpu's |
If you don't have enough resources to run it you can use bitsnbytes to quantize
If you don't have enough resources to run it, you can use bitsnbytes to quantize.
## Features
@ -56,7 +54,7 @@ If you don't have enough resources to run it you can use bitsnbytes to quantize
## Useful links
<img src="https://github.com/shelar1423/DocsGPT/assets/82649533/4320df71-6764-46a7-b38e-7bd718b931b4" alt="Audit" width="35" height="35" align="center"> [Live preview](https://docsgpt.arc53.com/)
<img src="https://github.com/shelar1423/DocsGPT/assets/82649533/bbd74228-f41f-4194-8fe2-531f7b29d61f" alt="Discord" width="35" height="35" align="center"> [Join Our Discord](https://discord.gg/n5BX8dh8rU)
@ -74,28 +72,28 @@ If you don't have enough resources to run it you can use bitsnbytes to quantize
## Project structure
- Application - Flask app (main application)
- Application - Flask app (main application).
- Extensions - Chrome extension
- Extensions - Chrome extension.
- Scripts - Script that creates similarity search index and store for other libraries.
- Scripts - Script that creates similarity search index and stores for other libraries.
- Frontend - Frontend uses Vite and React
- Frontend - Frontend uses Vite and React.
## QuickStart
Note: Make sure you have Docker installed
On Mac OS or Linux just write:
On Mac OS or Linux, write:
`./setup.sh`
It will install all the dependencies and give you an option to download local model or use OpenAI
It will install all the dependencies and allow you to download the local model or use OpenAI.
Otherwise refer to this Guide:
Otherwise, refer to this Guide:
1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a .env file in your root directory and set the env variable OPENAI_API_KEY with your OpenAI API key and VITE_API_STREAMING to true or false, depending on if you want streaming answers or not
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your OpenAI API key and `VITE_API_STREAMING` to true or false, depending on if you want streaming answers or not.
It should look like this inside:
```
@ -103,15 +101,15 @@ Otherwise refer to this Guide:
VITE_API_STREAMING=true
```
See optional environment variables in the `/.env-template` and `/application/.env_sample` files.
3. Run `./run-with-docker-compose.sh`
4. Navigate to http://localhost:5173/
3. Run `./run-with-docker-compose.sh`.
4. Navigate to http://localhost:5173/.
To stop just run Ctrl + C
To stop, just run `Ctrl + C`.
## Development environments
### Spin up mongo and redis
For development only 2 containers are used from docker-compose.yaml (by deleting all services except for Redis and Mongo).
For development, only two containers are used from `docker-compose.yaml` (by deleting all services except for Redis and Mongo).
See file [docker-compose-dev.yaml](./docker-compose-dev.yaml).
Run
@ -124,33 +122,32 @@ docker compose -f docker-compose-dev.yaml up -d
Make sure you have Python 3.10 or 3.11 installed.
1. Export required environment variables or prep .env file in application folder
Prepare .env file
Copy `.env_sample` and create `.env` with your OpenAI API token for the API_KEY and EMBEDDINGS_KEY fields
1. Export required environment variables or prepare a `.env` file in the `/application` folder:
- Copy `.env_sample` and create `.env` with your OpenAI API token for the `API_KEY` and `EMBEDDINGS_KEY` fields.
(check out application/core/settings.py if you want to see more config options)
3. (optional) Create a Python virtual environment
(check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.)
2. (optional) Create a Python virtual environment:
```commandline
python -m venv venv
. venv/bin/activate
```
4. Change to `application/` subdir and install dependencies for the backend
3. Change to the `application/` subdir and install dependencies for the backend:
```commandline
pip install -r application/requirements.txt
```
5. Run the app `flask run --host=0.0.0.0 --port=7091`
6. Start worker with `celery -A application.app.celery worker -l INFO`
4. Run the app using `flask run --host=0.0.0.0 --port=7091`.
5. Start worker with `celery -A application.app.celery worker -l INFO`.
### Start frontend
Make sure you have Node version 16 or higher.
1. Navigate to `/frontend` folder
2. Install dependencies
`npm install`
3. Run the app
`npm run dev`
1. Navigate to the `/frontend` folder.
2. Install dependencies by running `npm install`.
3. Run the app using `npm run dev`.
## All Thanks To Our Contributors
## Many Thanks To Our Contributors
<a href="[https://github.com/arc53/DocsGPT/graphs/contributors](https://docsgpt.arc53.com/)">
<img src="https://contrib.rocks/image?repo=arc53/DocsGPT" />
@ -158,4 +155,3 @@ Make sure you have Node version 16 or higher.
Built with [🦜️🔗 LangChain](https://github.com/hwchase17/langchain)


@ -1,68 +1,44 @@
import platform
import dotenv
from application.celery import celery
from flask import Flask, request, redirect
from application.core.settings import settings
from application.api.user.routes import user
from application.api.answer.routes import answer
from application.api.internal.routes import internal
# Redirect PosixPath to WindowsPath on Windows
if platform.system() == "Windows":
import pathlib
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath
# loading the .env file
dotenv.load_dotenv()
app = Flask(__name__)
app.register_blueprint(user)
app.register_blueprint(answer)
app.register_blueprint(internal)
app.config["UPLOAD_FOLDER"] = UPLOAD_FOLDER = "inputs"
app.config["CELERY_BROKER_URL"] = settings.CELERY_BROKER_URL
app.config["CELERY_RESULT_BACKEND"] = settings.CELERY_RESULT_BACKEND
app.config["MONGO_URI"] = settings.MONGO_URI
app.config.update(
UPLOAD_FOLDER="inputs",
CELERY_BROKER_URL=settings.CELERY_BROKER_URL,
CELERY_RESULT_BACKEND=settings.CELERY_RESULT_BACKEND,
MONGO_URI=settings.MONGO_URI
)
celery.config_from_object("application.celeryconfig")
@app.route("/")
def home():
"""
The frontend source code lives in the /frontend directory of the repository.
"""
if request.remote_addr in ('0.0.0.0', '127.0.0.1', 'localhost', '172.18.0.1'):
# If users locally try to access DocsGPT running in Docker,
# they will be redirected to the Frontend application.
return redirect('http://localhost:5173')
else:
# Handle other cases or render the default page
return 'Welcome to DocsGPT Backend!'
# handling CORS
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add("Access-Control-Allow-Headers", "Content-Type,Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET,PUT,POST,DELETE,OPTIONS")
# response.headers.add("Access-Control-Allow-Credentials", "true")
return response
if __name__ == "__main__":
app.run(debug=True, port=7091)


@ -32,6 +32,12 @@ class Settings(BaseSettings):
ELASTIC_URL: str = None # url for elasticsearch
ELASTIC_INDEX: str = "docsgpt" # index name for elasticsearch
# SageMaker config
SAGEMAKER_ENDPOINT: str = None # SageMaker endpoint name
SAGEMAKER_REGION: str = None # SageMaker region name
SAGEMAKER_ACCESS_KEY: str = None # SageMaker access key
SAGEMAKER_SECRET_KEY: str = None # SageMaker secret key
path = Path(__file__).parent.parent.absolute()
settings = Settings(_env_file=path.joinpath(".env"), _env_file_encoding="utf-8")
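Since `Settings` is a `BaseSettings` class loaded from the `.env` file referenced above, the new SageMaker fields can be supplied there as well. A hedged sketch with placeholder values (your endpoint name, region, and credentials will differ):
```
SAGEMAKER_ENDPOINT=your-endpoint-name
SAGEMAKER_REGION=us-west-2
SAGEMAKER_ACCESS_KEY=your-access-key
SAGEMAKER_SECRET_KEY=your-secret-key
```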


@ -1,27 +1,139 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
import requests
import json
import io
class LineIterator:
"""
A helper class for parsing the byte stream input.
The output of the model will be in the following format:
```
b'{"outputs": [" a"]}\n'
b'{"outputs": [" challenging"]}\n'
b'{"outputs": [" problem"]}\n'
...
```
While usually each PayloadPart event from the event stream will contain a byte array
with a full json, this is not guaranteed and some of the json objects may be split across
PayloadPart events. For example:
```
{'PayloadPart': {'Bytes': b'{"outputs": '}}
{'PayloadPart': {'Bytes': b'[" problem"]}\n'}}
```
This class accounts for this by concatenating bytes written via the 'write' function
and then exposing a method which will return lines (ending with a '\n' character) within
the buffer via the 'scan_lines' function. It maintains the position of the last read
position to ensure that previous bytes are not exposed again.
"""
def __init__(self, stream):
self.byte_iterator = iter(stream)
self.buffer = io.BytesIO()
self.read_pos = 0
def __iter__(self):
return self
def __next__(self):
while True:
self.buffer.seek(self.read_pos)
line = self.buffer.readline()
if line and line[-1] == ord('\n'):
self.read_pos += len(line)
return line[:-1]
try:
chunk = next(self.byte_iterator)
except StopIteration:
if self.read_pos < self.buffer.getbuffer().nbytes:
continue
raise
if 'PayloadPart' not in chunk:
print('Unknown event type:' + chunk)
continue
self.buffer.seek(0, io.SEEK_END)
self.buffer.write(chunk['PayloadPart']['Bytes'])
class SagemakerAPILLM(BaseLLM):
def __init__(self, *args, **kwargs):
self.url = settings.SAGEMAKER_API_URL
import boto3
runtime = boto3.client(
'runtime.sagemaker',
aws_access_key_id='xxx',
aws_secret_access_key='xxx',
region_name='us-west-2'
)
self.endpoint = settings.SAGEMAKER_ENDPOINT
self.runtime = runtime
def gen(self, model, engine, messages, stream=False, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
response = requests.post(
url=self.url,
headers={
"Content-Type": "application/json; charset=utf-8",
},
data=json.dumps({"input": prompt})
)
# Construct payload for endpoint
payload = {
"inputs": prompt,
"stream": False,
"parameters": {
"do_sample": True,
"temperature": 0.1,
"max_new_tokens": 30,
"repetition_penalty": 1.03,
"stop": ["</s>", "###"]
}
}
body_bytes = json.dumps(payload).encode('utf-8')
return response.json()['answer']
# Invoke the endpoint
response = self.runtime.invoke_endpoint(EndpointName=self.endpoint,
ContentType='application/json',
Body=body_bytes)
result = json.loads(response['Body'].read().decode())
import sys
print(result[0]['generated_text'], file=sys.stderr)
return result[0]['generated_text'][len(prompt):]
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
raise NotImplementedError("Sagemaker does not support streaming")
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
# Construct payload for endpoint
payload = {
"inputs": prompt,
"stream": True,
"parameters": {
"do_sample": True,
"temperature": 0.1,
"max_new_tokens": 512,
"repetition_penalty": 1.03,
"stop": ["</s>", "###"]
}
}
body_bytes = json.dumps(payload).encode('utf-8')
# Invoke the endpoint
response = self.runtime.invoke_endpoint_with_response_stream(EndpointName=self.endpoint,
ContentType='application/json',
Body=body_bytes)
#result = json.loads(response['Body'].read().decode())
event_stream = response['Body']
start_json = b'{'
for line in LineIterator(event_stream):
if line != b'' and start_json in line:
#print(line)
data = json.loads(line[line.find(start_json):].decode('utf-8'))
if data['token']['text'] not in ["</s>", "###"]:
print(data['token']['text'],end='')
yield data['token']['text']
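To make the buffering behaviour described in the `LineIterator` docstring concrete, here is a self-contained sketch; the fake event stream is made up to mimic the split-PayloadPart case and is not output captured from a real endpoint:
```python
import json

from application.llm.sagemaker import LineIterator  # same import path the tests below use

# Fake event stream: one JSON line split across two PayloadPart events,
# exactly the situation the docstring above warns about.
fake_stream = [
    {'PayloadPart': {'Bytes': b'{"outputs": '}},
    {'PayloadPart': {'Bytes': b'[" problem"]}\n'}},
]

for raw_line in LineIterator(fake_stream):
    # The partial payloads are buffered and re-assembled into one complete line.
    print(json.loads(raw_line.decode('utf-8')))  # -> {'outputs': [' problem']}
```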


@ -41,7 +41,7 @@ Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
kombu==5.2.4
langchain==0.0.263
langchain==0.0.308
loguru==0.6.0
lxml==4.9.2
MarkupSafe==2.1.2
@ -59,7 +59,7 @@ numpy==1.24.2
openai==0.27.8
packaging==23.0
pathos==0.3.0
Pillow==9.4.0
Pillow==10.0.1
pox==0.3.2
ppft==1.7.6.6
prompt-toolkit==3.0.38


@ -1,5 +1,5 @@
from application.vectorstore.base import BaseVectorStore
from langchain import FAISS
from langchain.vectorstores import FAISS
from application.core.settings import settings
class FaissStore(BaseVectorStore):


@ -19,7 +19,6 @@ services:
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- SELF_HOSTED_MODEL=$SELF_HOSTED_MODEL
ports:
- "7091:7091"
volumes:


@ -4,7 +4,7 @@ Here's a step-by-step guide on how to setup an Amazon Lightsail instance to host
## Configuring your instance
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here)
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking here).
### 1. Create an account or login to https://lightsail.aws.amazon.com
@ -36,7 +36,7 @@ Your instance will be ready for use a few minutes after being created. To access
#### Clone the repository
A terminal window will pop up, and the first step will be to clone the DocsGPT git repository.
A terminal window will pop up, and the first step will be to clone the DocsGPT git repository:
`git clone https://github.com/arc53/DocsGPT.git`
@ -64,11 +64,11 @@ Enter the following command to access the folder in which DocsGPT docker-compose
#### Prepare the environment
Inside the DocsGPT folder create a .env file and copy the contents of .env_sample into it.
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your .env file looks like this:
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
@ -103,10 +103,10 @@ Before you are able to access your live instance, you must first enable the port
Open your Lightsail instance and head to "Networking".
Then click on "Add rule" under "IPv4 Firewall", enter 5173 as your port, and hit "Create".
Repeat the process for port 7091.
Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.
#### Access your instance
Your instance will now be available under your Public IP Address and port 5173. Enjoy!
Your instance will now be available under your Public IP Address and port `5173`. Enjoy!


@ -9,23 +9,23 @@ It will install all the dependencies and give you an option to download the loca
Otherwise, refer to this Guide:
1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a .env file in your root directory and set your `API_KEY` with your openai api key
3. Run `docker-compose build && docker-compose up`
4. Navigate to `http://localhost:5173/`
1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI api key](https://platform.openai.com/account/api-keys).
3. Run `docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.
To stop just run Ctrl + C
To stop just run `Ctrl + C`.
### Chrome Extension
To install the Chrome extension:
1. In the DocsGPT GitHub repository, click on the "Code" button and select Download ZIP
2. Unzip the downloaded file to a location you can easily access
3. Open the Google Chrome browser and click on the three dots menu (upper right corner)
4. Select "More Tools" and then "Extensions"
5. Turn on the "Developer mode" switch in the top right corner of the Extensions page
6. Click on the "Load unpacked" button
7. Select the "Chrome" folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome)
8. The extension should now be added to Google Chrome and can be managed on the Extensions page
1. In the DocsGPT GitHub repository, click on the "Code" button and select "Download ZIP".
2. Unzip the downloaded file to a location you can easily access.
3. Open the Google Chrome browser and click on the three dots menu (upper right corner).
4. Select "More Tools" and then "Extensions".
5. Turn on the "Developer mode" switch in the top right corner of the Extensions page.
6. Click on the "Load unpacked" button.
7. Select the "Chrome" folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome).
8. The extension should now be added to Google Chrome and can be managed on the Extensions page.
9. To disable or remove the extension, simply turn off the toggle switch on the extension card or click the "Remove" button.


@ -1,8 +1,8 @@
App currently has two main api endpoints:
Currently, the application provides the following main API endpoints:
### /api/answer
Its a POST request that sends a JSON in body with 4 values. Here is a JavaScript fetch example
It will receive an answer for a user provided question
It's a POST request that sends a JSON in body with 4 values. It will receive an answer for a user provided question.
Here is a JavaScript fetch example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
@ -29,8 +29,8 @@ In response you will get a json document like this one:
```
### /api/docs_check
It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)
Its a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example
It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)).
It's a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example:
```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
@ -54,10 +54,10 @@ In response you will get a json document like this one:
### /api/combine
Provides json that tells UI which vectors are available and where they are located with a simple get request
Provides json that tells UI which vectors are available and where they are located with a simple get request.
Respsonse will include:
date, description, docLink, fullName, language, location (local or docshub), model, name, version
Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
Example of json in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
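A request in the same style as the other examples in this document might look like this (the host and port follow the earlier snippets; adjust them to your deployment):
```js
// combine (GET http://127.0.0.1:5000/api/combine)
fetch("http://localhost:5001/api/combine", {
  "method": "GET",
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
  }
})
.then((res) => res.json())
.then(console.log.bind(console))
```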
@ -69,15 +69,14 @@ HTML example:
```html
<form action="/api/upload" method="post" enctype="multipart/form-data" class="mt-2">
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">
<button type="submit" class="py-2 px-4 text-white bg-blue-500 rounded-md hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500">
Upload
</button>
</form>
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">
<button type="submit" class="py-2 px-4 text-white bg-blue-500 rounded-md hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500">
Upload
</button>
</form>
```
Response:
@ -90,7 +89,7 @@ Response:
```
### /api/task_status
Gets task status (task_id) from /api/upload
Gets task status (`task_id`) from `/api/upload`:
```js
// Task status (Get http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
@ -105,7 +104,7 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
Responses:
There are two types of responses:
1. while task it still running, where "current" will show progress from 0 - 100
1. while task it still running, where "current" will show progress from 0 to 100
```json
{
"result": {
@ -134,7 +133,7 @@ There are two types of responses:
```
### /api/delete_old
deletes old vecotstores
Deletes old vectorstores:
```js
// Task status (GET http://127.0.0.1:5000/api/docs_check)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
@ -146,7 +145,8 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then((res) => res.text())
.then(console.log.bind(console))
```
response:
Response:
```json
{ "status": "ok" }


@ -1,9 +1,8 @@
### To start chatwoot extension:
1. Prepare and start the DocsGPT itself (load your documentation too)
Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data
2. Go to chatwoot, Navigate to your profile (bottom left), click on profile settings, scroll to the bottom and copy Access Token
2. Navigate to `/extensions/chatwoot`. Copy .env_sample and create .env file
3. Fill in the values
1. Prepare and start the DocsGPT itself (load your documentation too). Follow our [wiki](https://github.com/arc53/DocsGPT/wiki) to start it and to [ingest](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation) data.
2. Go to chatwoot, **Navigate** to your profile (bottom left), click on profile settings, scroll to the bottom and copy **Access Token**.
3. Navigate to `/extensions/chatwoot`. Copy `.env_sample` and create `.env` file.
4. Fill in the values.
```
docsgpt_url=<docsgpt_api_url>
@ -12,18 +11,19 @@ docsgpt_key=<openai_api_key or other llm key>
chatwoot_token=<from part 2>
```
4. start with `flask run` command
5. Start with `flask run` command.
If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation
If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation.
### Optional (extra validation)
In app.py uncomment lines 12-13 and 71-75
In `app.py` uncomment lines 12-13 and 71-75
in your .env file add:
in your `.env` file add:
`account_id=(optional) 1 `
```
account_id=(optional) 1
assignee_id=(optional) 1
```
`assignee_id=(optional) 1`
Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user
Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user.


@ -1,7 +1,7 @@
### How to set up react docsGPT widget on your website:
### Installation
Got to your project and install a new dependency: `npm install docsgpt`
Got to your project and install a new dependency: `npm install docsgpt`.
### Usage
Go to your project and in the file where you want to use the widget import it:
@ -14,9 +14,9 @@ import "docsgpt/dist/style.css";
Then you can use it like this: `<DocsGPTWidget />`
DocsGPTWidget takes 3 props:
- `apiHost` - url of your DocsGPT API
- `selectDocs` - documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`)
- `apiKey` - usually its empty
- `apiHost` — url of your DocsGPT API.
- `selectDocs` documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`).
- `apiKey` — usually its empty.
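Putting the three props together, the element from the snippet above might be configured like this (the values are placeholders that follow the prop descriptions, not documented defaults):
```jsx
<DocsGPTWidget
  apiHost="http://localhost:7091"
  selectDocs="default"
  apiKey=""
/>
```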
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install you widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:


@ -1,4 +1,4 @@
## To customise a main prompt navigate to `/application/prompt/combine_prompt.txt`
## To customize a main prompt navigate to `/application/prompt/combine_prompt.txt`
You can try editing it to see how the model responds.
You can try editing it to see how the model responses.


@ -3,14 +3,13 @@ This AI can use any documentation, but first it needs to be prepared for similar
![video-example-of-how-to-do-it](https://d3dg1063dc54p9.cloudfront.net/videos/how-to-vectorise.gif)
Start by going to
`/scripts/` folder
Start by going to `/scripts/` folder.
If you open this file you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$
It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$.
You can usually find documentation on github in docs/ folder for most open-source projects.
You can usually find documentation on github in `docs/` folder for most open-source projects.
### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
Name it `inputs/`
@ -36,7 +35,7 @@ It will tell you how much it will cost
Once you run it will use new context that is relevant to your documentation
Make sure you select default in the dropdown in the UI
## Customisation
## Customization
You can learn more about options while running ingest.py by running:
`python ingest.py --help`


@ -1,10 +1,10 @@
Fortunately there are many providers for LLM's and some of them can even be ran locally
There are two models used in the app:
1. Embeddings
2. Text generation
1. Embeddings.
2. Text generation.
By default we use OpenAI's models but if you want to change it or even run it locally, its very simple!
By default, we use OpenAI's models but if you want to change it or even run it locally, it's very simple!
### Go to .env file or set environment variables:
@ -18,7 +18,7 @@ By default we use OpenAI's models but if you want to change it or even run it lo
`VITE_API_STREAMING=<true or false (true if using openai, false for all others)>`
You dont need to provide keys if you are happy with users providing theirs, so make sure you set LLM_NAME and EMBEDDINGS_NAME
You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.
Options:
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon)
@ -27,6 +27,6 @@ EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformer
That's it!
### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with important data and dont want anything to leave your premises.
If you are working with important data and don't want anything to leave your premises.
Make sure you set SELF_HOSTED_MODEL as true in you .env variable and for your LLM_NAME you can use anything that's on Hugging Face
Make sure you set `SELF_HOSTED_MODEL` as true in you `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
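For example, a self-hosted `.env` might contain something along these lines (the model name is one of the options listed above; pick whichever fits your hardware, and set `EMBEDDINGS_NAME` the same way):
```
SELF_HOSTED_MODEL=true
LLM_NAME=Arc53/docsgpt-7b-falcon
VITE_API_STREAMING=false
```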


@ -1,9 +1,12 @@
{
"name": "doc-ext",
"version": "0.0.1",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"version": "0.0.1",
"license": "MIT",
"devDependencies": {
"tailwindcss": "^3.2.4"
}
@ -407,10 +410,16 @@
}
},
"node_modules/nanoid": {
"version": "3.3.4",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.4.tgz",
"integrity": "sha512-MqBkQh/OHTS2egovRtLk45wEyNXwF+cokD+1YPf9u5VfJiRdAiRwB2froX5Co9Rh20xs4siNPm8naNotSD6RBw==",
"version": "3.3.6",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.6.tgz",
"integrity": "sha512-BGcqMMJuToF7i1rt+2PWSNVnWIkGCU78jBG3RxO/bZlnZPK2Cmi2QaffxGO/2RvWi9sL+FAiRiXMgsyxQ1DIDA==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/ai"
}
],
"bin": {
"nanoid": "bin/nanoid.cjs"
},
@ -470,9 +479,9 @@
}
},
"node_modules/postcss": {
"version": "8.4.21",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.21.tgz",
"integrity": "sha512-tP7u/Sn/dVxK2NnruI4H9BG+x+Wxz6oeZ1cJ8P6G/PZY0IKk4k/63TDsQf2kQq3+qoJeLm2kIBUNlZe3zgb4Zg==",
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"dev": true,
"funding": [
{
@ -482,10 +491,14 @@
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/postcss"
},
{
"type": "github",
"url": "https://github.com/sponsors/ai"
}
],
"dependencies": {
"nanoid": "^3.3.4",
"nanoid": "^3.3.6",
"picocolors": "^1.0.0",
"source-map-js": "^1.0.2"
},
@ -1094,9 +1107,9 @@
"dev": true
},
"nanoid": {
"version": "3.3.4",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.4.tgz",
"integrity": "sha512-MqBkQh/OHTS2egovRtLk45wEyNXwF+cokD+1YPf9u5VfJiRdAiRwB2froX5Co9Rh20xs4siNPm8naNotSD6RBw==",
"version": "3.3.6",
"resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.6.tgz",
"integrity": "sha512-BGcqMMJuToF7i1rt+2PWSNVnWIkGCU78jBG3RxO/bZlnZPK2Cmi2QaffxGO/2RvWi9sL+FAiRiXMgsyxQ1DIDA==",
"dev": true
},
"normalize-path": {
@ -1136,12 +1149,12 @@
"dev": true
},
"postcss": {
"version": "8.4.21",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.21.tgz",
"integrity": "sha512-tP7u/Sn/dVxK2NnruI4H9BG+x+Wxz6oeZ1cJ8P6G/PZY0IKk4k/63TDsQf2kQq3+qoJeLm2kIBUNlZe3zgb4Zg==",
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"dev": true,
"requires": {
"nanoid": "^3.3.4",
"nanoid": "^3.3.6",
"picocolors": "^1.0.0",
"source-map-js": "^1.0.2"
}


@ -1,12 +1,12 @@
{
"name": "docsgpt",
"version": "0.2.3",
"version": "0.2.4",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "docsgpt",
"version": "0.2.3",
"version": "0.2.4",
"license": "Apache-2.0",
"dependencies": {
"postcss-cli": "^10.1.0",
@ -19,7 +19,7 @@
"@types/react-dom": "^18.0.9",
"@vitejs/plugin-react-swc": "^3.0.0",
"autoprefixer": "^10.4.13",
"postcss": "^8.4.20",
"postcss": "^8.4.31",
"typescript": "^4.9.3",
"vite": "^4.0.0",
"vite-plugin-dts": "^1.7.1"
@ -1784,9 +1784,9 @@
}
},
"node_modules/postcss": {
"version": "8.4.29",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.29.tgz",
"integrity": "sha512-cbI+jaqIeu/VGqXEarWkRCCffhjgXc0qjBtXpqJhTBohMUjUQnbBr0xqX3vEKudc4iviTewcJo5ajcec5+wdJw==",
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"funding": [
{
"type": "opencollective",


@ -38,7 +38,7 @@
"@types/react-dom": "^18.0.9",
"@vitejs/plugin-react-swc": "^3.0.0",
"autoprefixer": "^10.4.13",
"postcss": "^8.4.20",
"postcss": "^8.4.31",
"typescript": "^4.9.3",
"vite": "^4.0.0",
"vite-plugin-dts": "^1.7.1"


@ -620,9 +620,9 @@
}
},
"node_modules/postcss": {
"version": "8.4.23",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.23.tgz",
"integrity": "sha512-bQ3qMcpF6A/YjR55xtoTr0jGOlnPOKAIMdOWiv0EIT6HVPEaJiJB4NLljSbiHoC2RX7DN5Uvjtpbg1NPdwv1oA==",
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"dev": true,
"funding": [
{

frontend/public/lock.svg Normal file

@ -0,0 +1,7 @@
<svg width="33" height="33" viewBox="0 0 33 33" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M8.25 13.75V11C8.25 6.44875 9.625 2.75 16.5 2.75C23.375 2.75 24.75 6.44875 24.75 11V13.75" stroke="#3A363E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M23.375 30.25H9.625C4.125 30.25 2.75 28.875 2.75 23.375V20.625C2.75 15.125 4.125 13.75 9.625 13.75H23.375C28.875 13.75 30.25 15.125 30.25 20.625V23.375C30.25 28.875 28.875 30.25 23.375 30.25Z" stroke="#3A363E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M21.9951 22H22.0075" stroke="#3A363E" stroke-width="3" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M16.4938 22H16.5061" stroke="#3A363E" stroke-width="3" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M10.9924 22H11.0048" stroke="#3A363E" stroke-width="3" stroke-linecap="round" stroke-linejoin="round"/>
</svg>



@ -0,0 +1,6 @@
<svg width="36" height="36" viewBox="0 0 36 36" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12.75 28.455H12C6 28.455 3 26.955 3 19.455V11.955C3 5.95496 6 2.95496 12 2.95496H24C30 2.95496 33 5.95496 33 11.955V19.455C33 25.455 30 28.455 24 28.455H23.25C22.785 28.455 22.335 28.68 22.05 29.055L19.8 32.055C18.81 33.375 17.19 33.375 16.2 32.055L13.95 29.055C13.71 28.725 13.17 28.455 12.75 28.455Z" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 13.05L9 16.05L12 19.05" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M24 13.05L27 16.05L24 19.05" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M19.5 12.5549L16.5 19.545" stroke="#3E3434" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
</svg>



@ -0,0 +1,5 @@
<svg width="29" height="29" viewBox="0 0 29 29" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.2708 22.9583H9.66663C4.83329 22.9583 2.41663 21.75 2.41663 15.7083V9.66663C2.41663 4.83329 4.83329 2.41663 9.66663 2.41663H19.3333C24.1666 2.41663 26.5833 4.83329 26.5833 9.66663V15.7083C26.5833 20.5416 24.1666 22.9583 19.3333 22.9583H18.7291C18.3545 22.9583 17.992 23.1395 17.7625 23.4416L15.95 25.8583C15.1525 26.9216 13.8475 26.9216 13.05 25.8583L11.2375 23.4416C11.0441 23.1758 10.597 22.9583 10.2708 22.9583Z" stroke="#363A3F" stroke-width="2" stroke-miterlimit="10" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8.45837 9.66663H20.5417" stroke="#363A3F" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8.45837 15.7084H15.7084" stroke="#363A3F" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>



@ -1,7 +1,7 @@
export default function Hero({ className = '' }: { className?: string }) {
return (
<div className={`flex flex-col ${className}`}>
<div className="mb-10 flex items-center justify-center">
<div className={`mt-14 mb-12 flex flex-col `}>
<div className="mb-10 flex items-center justify-center ">
<p className="mr-2 text-4xl font-semibold">DocsGPT</p>
<p className="text-[27px]">🦖</p>
</div>
@ -17,6 +17,53 @@ export default function Hero({ className = '' }: { className?: string }) {
Start by entering your query in the input field below and we will do the
rest!
</p>
<div className="sections mt-1 flex flex-wrap items-center justify-center gap-1 sm:gap-1 md:gap-0 ">
<div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tr-none md:rounded-br-none">
<div className="h-full rounded-[45px] bg-white p-6 md:rounded-tr-none md:rounded-br-none">
<img
src="/message-text.svg"
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">Chat with Your Data</h2>
<p className="w-[250px] text-xs text-gray-500">
DocsGPT will use your data to answer questions. Whether its
documentation, source code, or Microsoft files, DocsGPT allows you
to have interactive conversations and find answers based on the
provided data.
</p>
</div>
</div>
<div className=" rounded-[50px] bg-gradient-to-r from-[#6EE7B7]/70 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-none md:py-1 md:px-0">
<div className="rounded-[45px] bg-white px-6 py-4 md:rounded-none">
<img src="/lock.svg" alt="lock" className="h-[24px] w-[24px]" />
<h2 className="mt-2 mb-3 text-lg font-bold">Secure Data Storage</h2>
<p className=" w-[250px] text-xs text-gray-500">
The security of your data is our top priority. DocsGPT ensures the
utmost protection for your sensitive information. With secure data
storage and privacy measures in place, you can trust that your
data is kept safe and confidential.
</p>
</div>
</div>
<div className=" rounded-[50px] bg-gradient-to-l from-[#6EE7B7]/80 via-[#3B82F6] to-[#9333EA]/50 p-1 md:rounded-tl-none md:rounded-bl-none">
<div className="rounded-[45px] bg-white px-6 p-6 lg:rounded-tl-none lg:rounded-bl-none">
<img
src="/message-programming.svg"
alt="lock"
className="h-[24px] w-[24px]"
/>
<h2 className="mt-2 mb-3 text-lg font-bold">Open Source Code</h2>
<p className=" w-[250px] text-xs text-gray-500">
DocsGPT is built on open source principles, promoting transparency
and collaboration. The source code is freely available, enabling
developers to contribute, enhance, and customize the app to meet
their specific needs.
</p>
</div>
</div>
</div>
</div>
);
}


@ -8,6 +8,8 @@ import Hamburger from './assets/hamburger.svg';
import Key from './assets/key.svg';
import Info from './assets/info.svg';
import Link from './assets/link.svg';
import Discord from './assets/discord.svg';
import Github from './assets/github.svg';
import UploadIcon from './assets/upload.svg';
import { ActiveState } from './models/misc';
import APIKeyModal from './preferences/APIKeyModal';
@ -225,11 +227,11 @@ export default function Navigation() {
<div className="flex flex-col-reverse border-b-2">
<div className="relative my-4 flex gap-2 px-2">
<div
className="flex h-12 w-full cursor-pointer justify-between rounded-3xl rounded-md border-2 bg-white"
className="flex h-12 min-w-[85%] cursor-pointer justify-between rounded-3xl rounded-md border-2 bg-white"
onClick={() => setIsDocsListOpen(!isDocsListOpen)}
>
{selectedDocs && (
<p className="my-3 mx-4">
<p className="my-3 mx-4 overflow-hidden text-ellipsis whitespace-nowrap">
{selectedDocs.name} {selectedDocs.version}
</p>
)}
@ -323,26 +325,26 @@ export default function Navigation() {
<img src={Link} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Documentation</p>
</a>
<a
href="https://discord.gg/WHJdfbQDR4"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Link} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Discord</p>
</a>
<a
href="https://github.com/arc53/DocsGPT"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Link} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Github</p>
</a>
<div className="border-t-2">
<a
href="https://discord.gg/WHJdfbQDR4"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Discord} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Visit our Discord</p>
</a>
<a
href="https://github.com/arc53/DocsGPT"
target="_blank"
rel="noreferrer"
className="my-auto mx-4 flex h-12 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100"
>
<img src={Github} alt="link" className="ml-2 w-5" />
<p className="my-auto text-eerie-black">Visit our GitHub</p>
</a>
</div>
</div>
</div>
<div className="fixed h-16 w-full border-b-2 bg-gray-50 md:hidden">


@ -0,0 +1 @@
<svg width="256px" height="256px" viewBox="0 -28.5 256 256" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid" fill="#000000"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"> <g> <path d="M216.856339,16.5966031 C200.285002,8.84328665 182.566144,3.2084988 164.041564,0 C161.766523,4.11318106 159.108624,9.64549908 157.276099,14.0464379 C137.583995,11.0849896 118.072967,11.0849896 98.7430163,14.0464379 C96.9108417,9.64549908 94.1925838,4.11318106 91.8971895,0 C73.3526068,3.2084988 55.6133949,8.86399117 39.0420583,16.6376612 C5.61752293,67.146514 -3.4433191,116.400813 1.08711069,164.955721 C23.2560196,181.510915 44.7403634,191.567697 65.8621325,198.148576 C71.0772151,190.971126 75.7283628,183.341335 79.7352139,175.300261 C72.104019,172.400575 64.7949724,168.822202 57.8887866,164.667963 C59.7209612,163.310589 61.5131304,161.891452 63.2445898,160.431257 C105.36741,180.133187 151.134928,180.133187 192.754523,160.431257 C194.506336,161.891452 196.298154,163.310589 198.110326,164.667963 C191.183787,168.842556 183.854737,172.420929 176.223542,175.320965 C180.230393,183.341335 184.861538,190.991831 190.096624,198.16893 C211.238746,191.588051 232.743023,181.531619 254.911949,164.955721 C260.227747,108.668201 245.831087,59.8662432 216.856339,16.5966031 Z M85.4738752,135.09489 C72.8290281,135.09489 62.4592217,123.290155 62.4592217,108.914901 C62.4592217,94.5396472 72.607595,82.7145587 85.4738752,82.7145587 C98.3405064,82.7145587 108.709962,94.5189427 108.488529,108.914901 C108.508531,123.290155 98.3405064,135.09489 85.4738752,135.09489 Z M170.525237,135.09489 C157.88039,135.09489 147.510584,123.290155 147.510584,108.914901 C147.510584,94.5396472 157.658606,82.7145587 170.525237,82.7145587 C183.391518,82.7145587 193.761324,94.5189427 193.539891,108.914901 C193.539891,123.290155 183.391518,135.09489 170.525237,135.09489 Z" fill="#61626b" fill-rule="nonzero"> </path> </g> </g></svg>



@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="1em" viewBox="0 0 448 512"><!--! Font Awesome Free 6.4.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license (Commercial License) Copyright 2023 Fonticons, Inc. --><style>svg{fill:#4c4d52}</style><path d="M400 32H48C21.5 32 0 53.5 0 80v352c0 26.5 21.5 48 48 48h352c26.5 0 48-21.5 48-48V80c0-26.5-21.5-48-48-48zM277.3 415.7c-8.4 1.5-11.5-3.7-11.5-8 0-5.4.2-33 .2-55.3 0-15.6-5.2-25.5-11.3-30.7 37-4.1 76-9.2 76-73.1 0-18.2-6.5-27.3-17.1-39 1.7-4.3 7.4-22-1.7-45-13.9-4.3-45.7 17.9-45.7 17.9-13.2-3.7-27.5-5.6-41.6-5.6-14.1 0-28.4 1.9-41.6 5.6 0 0-31.8-22.2-45.7-17.9-9.1 22.9-3.5 40.6-1.7 45-10.6 11.7-15.6 20.8-15.6 39 0 63.6 37.3 69 74.3 73.1-4.8 4.3-9.1 11.7-10.6 22.3-9.5 4.3-33.8 11.7-48.3-13.9-9.1-15.8-25.5-17.1-25.5-17.1-16.2-.2-1.1 10.2-1.1 10.2 10.8 5 18.4 24.2 18.4 24.2 9.7 29.7 56.1 19.7 56.1 19.7 0 13.9.2 36.5.2 40.6 0 4.3-3 9.5-11.5 8-66-22.1-112.2-84.9-112.2-158.3 0-91.8 70.2-161.5 162-161.5S388 165.6 388 257.4c.1 73.4-44.7 136.3-110.7 158.3zm-98.1-61.1c-1.9.4-3.7-.4-3.9-1.7-.2-1.5 1.1-2.8 3-3.2 1.9-.2 3.7.6 3.9 1.9.3 1.3-1 2.6-3 3zm-9.5-.9c0 1.3-1.5 2.4-3.5 2.4-2.2.2-3.7-.9-3.7-2.4 0-1.3 1.5-2.4 3.5-2.4 1.9-.2 3.7.9 3.7 2.4zm-13.7-1.1c-.4 1.3-2.4 1.9-4.1 1.3-1.9-.4-3.2-1.9-2.8-3.2.4-1.3 2.4-1.9 4.1-1.5 2 .6 3.3 2.1 2.8 3.4zm-12.3-5.4c-.9 1.1-2.8.9-4.3-.6-1.5-1.3-1.9-3.2-.9-4.1.9-1.1 2.8-.9 4.3.6 1.3 1.3 1.8 3.3.9 4.1zm-9.1-9.1c-.9.6-2.6 0-3.7-1.5s-1.1-3.2 0-3.9c1.1-.9 2.8-.2 3.7 1.3 1.1 1.5 1.1 3.3 0 4.1zm-6.5-9.7c-.9.9-2.4.4-3.5-.6-1.1-1.3-1.3-2.8-.4-3.5.9-.9 2.4-.4 3.5.6 1.1 1.3 1.3 2.8.4 3.5zm-6.7-7.4c-.4.9-1.7 1.1-2.8.4-1.3-.6-1.9-1.7-1.5-2.6.4-.6 1.5-.9 2.8-.4 1.3.7 1.9 1.8 1.5 2.6z"/></svg>



@ -113,7 +113,7 @@ export default function Conversation() {
};
return (
<div className="flex justify-center p-4">
<div className="flex flex-col justify-center p-4 md:flex-row">
{queries.length > 0 && !hasScrolledToLast ? (
<button
onClick={scrollIntoView}
@ -146,11 +146,14 @@ export default function Conversation() {
})}
</div>
)}
{queries.length === 0 && <Hero className="mt-24 md:mt-52"></Hero>}
<div className="fixed bottom-0 flex w-10/12 flex-col items-end self-center md:w-[50%]">
<div className="flex w-full">
{queries.length === 0 && (
<Hero className="mt-24 h-[100vh] md:mt-52"></Hero>
)}
<div className="relative bottom-0 flex w-10/12 flex-col items-end self-center md:fixed md:w-[50%]">
<div className="flex h-full w-full">
<div
ref={inputRef}
placeholder="Type your message here..."
contentEditable
onPaste={handlePaste}
className={`border-000000 overflow-x-hidden; max-h-24 min-h-[2.6rem] w-full overflow-y-auto whitespace-pre-wrap rounded-xl border bg-white py-2 pl-4 pr-9 leading-7 opacity-100 focus:outline-none`}


@ -358,3 +358,9 @@ template {
[hidden] {
display: none;
}
[contentEditable]:empty:before {
content: attr(placeholder);
color: #9ca3af;
opacity: 1;
}


@ -47,7 +47,7 @@ javalang==0.13.0
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.3.1
langchain==0.0.252
langchain==0.0.308
lxml==4.9.3
manifest-ml==0.1.8
MarkupSafe==2.1.3


@ -0,0 +1,96 @@
# FILEPATH: /path/to/test_sagemaker.py
import json
import unittest
from unittest.mock import MagicMock, patch
from application.llm.sagemaker import SagemakerAPILLM, LineIterator
class TestSagemakerAPILLM(unittest.TestCase):
def setUp(self):
self.sagemaker = SagemakerAPILLM()
self.context = "This is the context"
self.user_question = "What is the answer?"
self.messages = [
{"content": self.context},
{"content": "Some other message"},
{"content": self.user_question}
]
self.prompt = f"### Instruction \n {self.user_question} \n ### Context \n {self.context} \n ### Answer \n"
self.payload = {
"inputs": self.prompt,
"stream": False,
"parameters": {
"do_sample": True,
"temperature": 0.1,
"max_new_tokens": 30,
"repetition_penalty": 1.03,
"stop": ["</s>", "###"]
}
}
self.payload_stream = {
"inputs": self.prompt,
"stream": True,
"parameters": {
"do_sample": True,
"temperature": 0.1,
"max_new_tokens": 512,
"repetition_penalty": 1.03,
"stop": ["</s>", "###"]
}
}
self.body_bytes = json.dumps(self.payload).encode('utf-8')
self.body_bytes_stream = json.dumps(self.payload_stream).encode('utf-8')
self.response = {
"Body": MagicMock()
}
self.result = [
{
"generated_text": "This is the generated text"
}
]
self.response['Body'].read.return_value.decode.return_value = json.dumps(self.result)
def test_gen(self):
with patch.object(self.sagemaker.runtime, 'invoke_endpoint',
return_value=self.response) as mock_invoke_endpoint:
output = self.sagemaker.gen(None, None, self.messages)
mock_invoke_endpoint.assert_called_once_with(
EndpointName=self.sagemaker.endpoint,
ContentType='application/json',
Body=self.body_bytes
)
self.assertEqual(output,
self.result[0]['generated_text'][len(self.prompt):])
def test_gen_stream(self):
with patch.object(self.sagemaker.runtime, 'invoke_endpoint_with_response_stream',
return_value=self.response) as mock_invoke_endpoint:
output = list(self.sagemaker.gen_stream(None, None, self.messages))
mock_invoke_endpoint.assert_called_once_with(
EndpointName=self.sagemaker.endpoint,
ContentType='application/json',
Body=self.body_bytes_stream
)
self.assertEqual(output, [])
class TestLineIterator(unittest.TestCase):
def setUp(self):
self.stream = [
{'PayloadPart': {'Bytes': b'{"outputs": [" a"]}\n'}},
{'PayloadPart': {'Bytes': b'{"outputs": [" challenging"]}\n'}},
{'PayloadPart': {'Bytes': b'{"outputs": [" problem"]}\n'}}
]
self.line_iterator = LineIterator(self.stream)
def test_iter(self):
self.assertEqual(iter(self.line_iterator), self.line_iterator)
def test_next(self):
self.assertEqual(next(self.line_iterator), b'{"outputs": [" a"]}')
self.assertEqual(next(self.line_iterator), b'{"outputs": [" challenging"]}')
self.assertEqual(next(self.line_iterator), b'{"outputs": [" problem"]}')
if __name__ == '__main__':
unittest.main()