Merge branch 'main' into patch-2

pull/643/head
KRISH SONI 12 months ago committed by GitHub
commit e9323ba2ec

@ -84,6 +84,19 @@ def api_feedback():
    )
    return {"status": http.client.responses.get(response.status_code, "ok")}


@user.route("/api/delete_by_ids", methods=["get"])
def delete_by_ids():
    """Delete by ID. These are the IDs in the vectorstore"""
    ids = request.args.get("path")
    if not ids:
        return {"status": "error"}
    if settings.VECTOR_STORE == "faiss":
        result = vectors_collection.delete_index(ids=ids)
        if result:
            return {"status": "ok"}
    return {"status": "error"}


@user.route("/api/delete_old", methods=["get"])
def delete_old():

@ -27,6 +27,9 @@ class FaissStore(BaseVectorStore):
    def save_local(self, *args, **kwargs):
        return self.docsearch.save_local(*args, **kwargs)

    def delete_index(self, *args, **kwargs):
        return self.docsearch.delete(*args, **kwargs)

    def assert_embedding_dimensions(self, embeddings):
        """
        Check that the word embedding dimension of the docsearch index matches
@ -40,5 +43,4 @@ class FaissStore(BaseVectorStore):
        docsearch_index_dimension = self.docsearch.index.d
        if word_embedding_dimension != docsearch_index_dimension:
            raise ValueError(f"word_embedding_dimension ({word_embedding_dimension}) " +
                             f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")
f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")

@ -102,3 +102,7 @@ Repeat the process for port `7091`.
#### Access your instance
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)

@ -1,9 +1,25 @@
# API Endpoints Documentation
*Currently, the application provides the following main API endpoints:*
### 1. /api/answer
**Description:**
This endpoint is used to request answers to user-provided questions.
**Request:**
* Method: POST
* Headers: `Content-Type: application/json; charset=utf-8`
* Request Body: JSON object with the following fields:
* **question:** The user's question
* **history:** (Optional) Previous conversation history
* **api_key:** Your API key
* **embeddings_key:** Your embeddings key
* **active_docs:** The location of active documentation
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
@ -18,8 +34,9 @@ fetch("http://127.0.0.1:5000/api/answer", {
.then(console.log.bind(console))
```
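For reference, here is a sketch of a complete request with all five fields filled in. The values below (the question text, the keys, and the `default` docs location) are placeholders for illustration, not values the API requires:
```js
// answer (POST http://127.0.0.1:5000/api/answer) — all values below are placeholders
fetch("http://127.0.0.1:5000/api/answer", {
  method: "POST",
  headers: {
    "Content-Type": "application/json; charset=utf-8",
  },
  body: JSON.stringify({
    question: "What is DocsGPT?",         // the user's question
    history: null,                        // optional previous conversation history
    api_key: "YOUR_API_KEY",
    embeddings_key: "YOUR_EMBEDDINGS_KEY",
    active_docs: "default",               // location of the active documentation
  }),
})
  .then((res) => res.json())
  .then(console.log.bind(console));
```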
**Response:**
In response, you will get a JSON document containing the answer, the query, and the result:
```json
{
"answer": " Hi there! How can I help you?\n",
@ -28,10 +45,17 @@ In response, you will get a JSON document like this one:
}
```
### 2. /api/docs_check
**Description:**
This endpoint makes sure the documentation is loaded on the server (run it every time the user switches between libraries/documentations).
**Request:**
* Method: POST
* Headers: `Content-Type: application/json; charset=utf-8`
* Request Body: JSON object with the field:
* **docs:** The location of the documentation

Here is a JavaScript Fetch Request example:
```js
// docs_check (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
@ -45,7 +69,9 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
.then(console.log.bind(console))
```
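A complete check request might look like the sketch below; the `default` location is a placeholder, so substitute the documentation location you actually use:
```js
// docs_check (POST http://127.0.0.1:5000/api/docs_check) — "default" is a placeholder location
fetch("http://127.0.0.1:5000/api/docs_check", {
  method: "POST",
  headers: {
    "Content-Type": "application/json; charset=utf-8",
  },
  body: JSON.stringify({
    docs: "default", // the location of the documentation to check
  }),
})
  .then((res) => res.json())
  .then(console.log.bind(console));
```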
**Response:**
In response, you will get a JSON document like this one, indicating whether the documentation exists or not:
```json
{
"status": "exists"
@ -53,18 +79,36 @@ In response, you will get a JSON document like this one:
```
### 3. /api/combine
**Description:**
This endpoint provides information about which vectors are available and where they are located with a simple GET request.
**Request:**
* Method: GET
**Response:**
The response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
Example of JSON in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
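This endpoint takes no request body, so a minimal JavaScript Fetch Request, assuming the same local server as the examples above, could look like this:
```js
// combine (GET http://127.0.0.1:5000/api/combine)
fetch("http://127.0.0.1:5000/api/combine", {
  method: "GET",
  headers: {
    "Content-Type": "application/json; charset=utf-8",
  },
})
  .then((res) => res.json())
  .then(console.log.bind(console)); // logs the list of available vectors
```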
### 4. /api/upload
**Description:**
This endpoint uploads a file that needs to be trained. The response is JSON with a task ID, which can be used to check the task's progress.
**Request:**
* Method: POST
* Request Body: A multipart/form-data form with a file upload and additional fields, including "user" and "name".
HTML example:
```html
@ -79,20 +123,24 @@ HTML example:
</form>
```
**Response:**
JSON response with a status and a task ID that can be used to check the task's progress.
```json
{
  "status": "ok",
  "task_id": "b2684988-9047-428b-bd47-08518679103c"
}
```
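If you are uploading from JavaScript instead of an HTML form, a sketch along the following lines should work. The `"file"` field name and the example values are assumptions, so match them to the form fields your deployment expects:
```js
// upload (POST http://127.0.0.1:5000/api/upload) — field names and values are illustrative
const fileInput = document.querySelector('input[type="file"]');

const formData = new FormData();
formData.append("user", "local");            // example user value
formData.append("name", "my-documentation"); // example name for the uploaded docs
formData.append("file", fileInput.files[0]); // a File object from the <input type="file">

fetch("http://127.0.0.1:5000/api/upload", {
  method: "POST",
  body: formData, // the browser sets the multipart/form-data boundary automatically
})
  .then((res) => res.json())
  .then((data) => console.log(data.task_id)); // use this ID with /api/task_status
```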
### 5. /api/task_status
**Description:**
This endpoint gets the status of a task (`task_id`) returned by `/api/upload`.
**Request:**
* Method: GET
* Query Parameter: `task_id` (the task ID to check)
**Sample JavaScript Fetch Request:**
```js
// Task status (GET http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
@ -102,7 +150,8 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then(console.log.bind(console))
```
**Response:**
There are two types of responses:
1. While the task is still running, the 'current' value will show progress from 0 to 100.
@ -134,9 +183,14 @@ There are two types of responses:
}
```
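If you want to wait for an upload to finish, you can poll this endpoint until a final state is reported. The sketch below assumes the returned JSON exposes a `status` field with values such as `PROGRESS` and `SUCCESS`; those names are assumptions, so adjust the check to whatever statuses your deployment actually returns.
```js
// Poll /api/task_status every 2 seconds until the task leaves the assumed PROGRESS state
async function waitForTask(taskId) {
  for (;;) {
    const res = await fetch(
      `http://localhost:5001/api/task_status?task_id=${taskId}`,
      { headers: { "Content-Type": "application/json; charset=utf-8" } }
    );
    const data = await res.json();
    console.log(data); // while running, the response includes a 'current' progress value (0 to 100)
    if (data.status !== "PROGRESS") {
      return data; // final state, e.g. success (with a result) or failure
    }
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}
```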
### 6. /api/delete_old
**Description:**
This endpoint deletes old vector stores.
**Request:**
* Method: GET
```js
// Delete old (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://localhost:5001/api/delete_old?path=YOUR_VECTORSTORE_PATH", {
@ -148,8 +202,10 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
JSON response indicating the status of the operation.
```json
{ "status": "ok" }
```

@ -1,4 +1,27 @@
# Customizing the Main Prompt
To customize the main prompt for DocsGPT, follow these steps:
1. Navigate to `/application/prompt/combine_prompt.txt`.
2. Edit the `combine_prompt.txt` file to modify the prompt text. You can experiment with different phrasings and structures to see how the model responds.
## Example Prompt Modification
**Original Prompt:**
```markdown
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the users question. If its not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
(code)
{summaries}
```
## Conclusion
Customizing the main prompt for DocsGPT allows you to tailor the AI's responses to your unique requirements. Whether you need in-depth explanations, code examples, or specific insights, you can achieve it by modifying the main prompt. Remember to experiment and fine-tune your prompts to get the best results.

@ -12,28 +12,28 @@ It currently uses OPEN_AI to create the vector store, so make sure your document
You can usually find documentation on Github in `docs/` folder for most open-source projects.
### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
- Name it `inputs/`.
- Put all your .rst/.md files in there.
- The search is recursive, so you don't need to flatten them.

If there are no .rst/.md files, just convert whatever you find to a .txt file and feed it in (don't forget to change the extension in the script).
### 2. Create .env file in `scripts/` folder
And write your OpenAI API key inside
`OPENAI_API_KEY=<your-api-key>`.
### 3. Run scripts/ingest.py
`python ingest.py ingest`
It will tell you how much it will cost.
### 4. Move `index.faiss` and `index.pkl` generated in `scripts/output` to `application/` folder.
### 5. Run web app
Once you run it, it will use the new context that is relevant to your documentation.
Make sure you select "default" in the dropdown in the UI.
## Customization
You can learn more about options while running ingest.py by running:

@ -1,10 +1,10 @@
Fortunately, there are many providers for LLMs, and some of them can even be run locally.
There are two models used in the app:
1. Embeddings.
2. Text generation.
By default, we use OpenAI's models, but if you want to change it or even run it locally, it's very simple!
### Go to .env file or set environment variables:
@ -31,6 +31,6 @@ Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choo
That's it!
### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with critical data and don't want anything to leave your premises, make sure you set `SELF_HOSTED_MODEL` to true in your `.env` file, and for your `LLM_NAME` you can use anything that is on Hugging Face.

@ -255,7 +255,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
src={Arrow2}
alt="arrow"
className={`${
isDocsListOpen ? 'rotate-0' : 'rotate-180'
!isDocsListOpen ? 'rotate-0' : 'rotate-180'
} ml-auto mr-3 w-3 transition-all`}
/>
</div>
@ -362,7 +362,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
</a>
</div>
</div>
<div className="fixed h-16 w-full border-b-2 bg-gray-50 md:hidden">
<div className="fixed z-10 h-16 w-full border-b-2 bg-gray-50 md:hidden">
<button
className="mt-5 ml-6 h-6 w-6 md:hidden"
onClick={() => setNavOpen(true)}

@ -0,0 +1,11 @@
.list p {
display: inline;
}
.list li:not(:first-child) {
margin-top: 1em;
}
.list li > .list {
margin-top: 1em;
}

@ -1,6 +1,7 @@
import { forwardRef, useState } from 'react';
import Avatar from '../Avatar';
import { FEEDBACK, MESSAGE_TYPE } from './conversationModels';
import classes from './ConversationBubble.module.css';
import Alert from './../assets/alert.svg';
import { ReactComponent as Like } from './../assets/like.svg';
import { ReactComponent as Dislike } from './../assets/dislike.svg';
@ -27,7 +28,6 @@ const ConversationBubble = forwardRef<
{ message, type, className, feedback, handleFeedback, sources },
ref,
) {
const [showFeedback, setShowFeedback] = useState(false);
const [openSource, setOpenSource] = useState<number | null>(null);
const [copied, setCopied] = useState(false);
@ -40,16 +40,6 @@ const ConversationBubble = forwardRef<
}, 2000);
};
const List = ({
ordered,
children,
}: {
ordered?: boolean;
children: React.ReactNode;
}) => {
const Tag = ordered ? 'ol' : 'ul';
return <Tag className="list-inside list-disc">{children}</Tag>;
};
let bubble;
if (type === 'QUESTION') {
@ -65,12 +55,7 @@ const ConversationBubble = forwardRef<
);
} else {
bubble = (
<div
ref={ref}
className={`flex self-start ${className} flex-col`}
onMouseEnter={() => setShowFeedback(true)}
onMouseLeave={() => setShowFeedback(false)}
>
<div ref={ref} className={`flex self-start ${className} group flex-col`}>
<div className="flex self-start">
<Avatar className="mt-2 text-2xl" avatar="🦖"></Avatar>
<div
@ -104,11 +89,23 @@ const ConversationBubble = forwardRef<
</code>
);
},
ul({ node, children }) {
return <List>{children}</List>;
ul({ children }) {
return (
<ul
className={`list-inside list-disc whitespace-normal pl-4 ${classes.list}`}
>
{children}
</ul>
);
},
ol({ node, children }) {
return <List ordered>{children}</List>;
ol({ children }) {
return (
<ol
className={`list-inside list-decimal whitespace-normal pl-4 ${classes.list}`}
>
{children}
</ol>
);
},
}}
>
@ -118,9 +115,7 @@ const ConversationBubble = forwardRef<
<>
<span className="mt-3 h-px w-full bg-[#DEDEDE]"></span>
<div className="mt-3 flex w-full flex-row flex-wrap items-center justify-start gap-2">
<div className="py-1 text-base font-semibold">
Sources:
</div>
<div className="py-1 text-base font-semibold">Sources:</div>
<div className="flex flex-row flex-wrap items-center justify-start gap-2">
{sources?.map((source, index) => (
<div
@ -151,8 +146,8 @@ const ConversationBubble = forwardRef<
)}
</div>
<div
className={`relative mr-2 flex items-center justify-center ${
type !== 'ERROR' && showFeedback ? '' : 'md:invisible'
className={`relative mr-2 flex items-center justify-center md:invisible ${
type !== 'ERROR' ? 'group-hover:md:visible' : ''
}`}
>
{copied ? (
@ -167,10 +162,10 @@ const ConversationBubble = forwardRef<
)}
</div>
<div
className={`relative mr-2 flex items-center justify-center ${
feedback === 'LIKE' || (type !== 'ERROR' && showFeedback)
? ''
: 'md:invisible'
className={`relative mr-2 flex items-center justify-center md:invisible ${
feedback === 'LIKE' || type !== 'ERROR'
? 'group-hover:md:visible'
: ''
}`}
>
<Like
@ -183,10 +178,10 @@ const ConversationBubble = forwardRef<
></Like>
</div>
<div
className={`relative mr-10 flex items-center justify-center ${
feedback === 'DISLIKE' || (type !== 'ERROR' && showFeedback)
? ''
: 'md:invisible'
className={`relative mr-10 flex items-center justify-center md:invisible ${
feedback === 'DISLIKE' || type !== 'ERROR'
? 'group-hover:md:visible'
: ''
}`}
>
<Dislike

@ -55,8 +55,9 @@ export default function Upload({
setProgress(undefined);
setModalState('INACTIVE');
}}
className={`rounded-3xl bg-purple-30 px-4 py-2 text-sm font-medium text-white ${isCancellable ? '' : 'hidden'
}`}
className={`rounded-3xl bg-purple-30 px-4 py-2 text-sm font-medium text-white ${
isCancellable ? '' : 'hidden'
}`}
>
Finish
</button>
@ -205,9 +206,10 @@ export default function Upload({
<div className="flex flex-row-reverse">
<button
onClick={uploadFile}
className={`ml-6 rounded-3xl bg-purple-30 text-white ${files.length > 0 ? '' : 'bg-opacity-75 text-opacity-80'
} py-2 px-6`}
disabled={files.length === 0} // Disable the button if no file is selected
className={`ml-6 rounded-3xl bg-purple-30 text-white ${
files.length > 0 ? '' : 'bg-opacity-75 text-opacity-80'
} py-2 px-6`}
disabled={files.length === 0} // Disable the button if no file is selected
>
Train
</button>
@ -228,8 +230,9 @@ export default function Upload({
return (
<article
className={`${modalState === 'ACTIVE' ? 'visible' : 'hidden'
} absolute z-30 h-screen w-screen bg-gray-alpha`}
className={`${
modalState === 'ACTIVE' ? 'visible' : 'hidden'
} absolute z-30 h-screen w-screen bg-gray-alpha`}
>
<article className="mx-auto mt-24 flex w-[90vw] max-w-lg flex-col gap-4 rounded-lg bg-white p-6 shadow-lg">
{view}
