Mirror of https://github.com/arc53/DocsGPT, synced 2024-11-03 23:15:37 +00:00.

Merge pull request #512 from drk1rd/main: "Grammar and punctuations improved" (commit 53ed6e54b5).
````diff
@@ -18,7 +18,7 @@ After that, it is time to pick your Instance Image. We recommend using "Linux/Un
 As for instance plan, it'll vary depending on your unique demands, but a "1 GB, 1vCPU, 40GB SSD and 2TB transfer" setup should cover most scenarios.
-Lastly, Identify your instance by giving it a unique name and then hit "Create instance".
+Lastly, identify your instance by giving it a unique name and then hit "Create instance".
 PS: Once you create your instance, it'll likely take a few minutes for the setup to be completed.
````
````diff
@@ -42,7 +42,7 @@ A terminal window will pop up, and the first step will be to clone the DocsGPT g
 #### Download the package information
-Once it has finished cloning the repository, it is time to download the package information from all sources. To do so simply enter the following command:
+Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
 `sudo apt update`
````
````diff
@@ -64,7 +64,7 @@ Enter the following command to access the folder in which DocsGPT docker-compose
 #### Prepare the environment
-Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
+Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it.
 `nano .env`
````
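For context on this step: the `.env` created here starts as a copy of `.env_sample`. A minimal sketch might look like the following; the `API_KEY` name appears elsewhere in the docs touched by this commit, and the value is a placeholder:

```
# Minimal sketch of a DocsGPT .env (value is a placeholder, not a real key)
API_KEY=your-openai-api-key
```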
````diff
@@ -95,7 +95,7 @@ You're almost there! Now that all the necessary bits and pieces have been instal
 Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
-Once this is done you can go ahead and close the terminal window.
+Once this is done, you can go ahead and close the terminal window.
 #### Enabling ports
````
````diff
@@ -1,7 +1,7 @@
 ## Launching Web App
 Note: Make sure you have Docker installed
-On Mac OS or Linux just write:
+On macOS or Linux, just write:
 `./setup.sh`
````
````diff
@@ -10,11 +10,11 @@ It will install all the dependencies and give you an option to download the loca
 Otherwise, refer to this Guide:
 1. Open and download this repository with `git clone https://github.com/arc53/DocsGPT.git`.
-2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI api key](https://platform.openai.com/account/api-keys).
+2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys).
 3. Run `docker-compose build && docker-compose up`.
 4. Navigate to `http://localhost:5173/`.
-To stop just run `Ctrl + C`.
+To stop, just run `Ctrl + C`.
 ### Chrome Extension
````
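The numbered steps in the hunk above amount to roughly this command sequence. This is a sketch: the `cd DocsGPT` step and the exact `API_KEY=` line format are assumptions; the rest comes from the steps themselves:

```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT                                  # assumed: work from the cloned root
echo "API_KEY=your-openai-api-key" > .env   # placeholder value
docker-compose build && docker-compose up
# then open http://localhost:5173/ and press Ctrl + C to stop
```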
````diff
@@ -18,7 +18,7 @@ fetch("http://127.0.0.1:5000/api/answer", {
 .then(console.log.bind(console))
 ```
-In response you will get a json document like this one:
+In response, you will get a JSON document like this one:
 ```json
 {
````
````diff
@@ -30,7 +30,7 @@ In response you will get a json document like this one:
 ### /api/docs_check
 It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)).
-It's a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example:
+It's a POST request that sends a JSON in a body with 1 value. Here is a JavaScript fetch example:
 ```js
 // answer (POST http://127.0.0.1:5000/api/docs_check)
````
````diff
@@ -45,7 +45,7 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
 .then(console.log.bind(console))
 ```
-In response you will get a json document like this one:
+In response, you will get a JSON document like this one:
 ```json
 {
 "status": "exists"
````
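As a reading aid for the `/api/docs_check` docs edited above: the only documented success shape is `{"status": "exists"}`, so a client-side check might be sketched like this. Treating any other status value as "not loaded" is an assumption, not something the docs state:

```javascript
// Sketch: interpret the /api/docs_check response shown in the diff above.
// Only the "exists" status is documented in the diff; anything else is
// assumed to mean the documentation still needs to be loaded on the server.
function docsAreLoaded(response) {
  return Boolean(response && response.status === "exists");
}

console.log(docsAreLoaded({ status: "exists" }));  // true
console.log(docsAreLoaded({ status: "loading" })); // false
```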
````diff
@@ -54,17 +54,17 @@ In response you will get a json document like this one:
 ### /api/combine
-Provides json that tells UI which vectors are available and where they are located with a simple get request.
+Provides JSON that tells UI which vectors are available and where they are located with a simple get request.
 Response will include:
 `date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
-Example of json in Docshub and local:
+Example of JSON in Docshub and local:
 <img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
 ### /api/upload
-Uploads file that needs to be trained, response is json with task id, which can be used to check on tasks progress
+Uploads file that needs to be trained, response is JSON with task ID, which can be used to check on task's progress
 HTML example:
 ```html
````
````diff
@@ -104,7 +104,7 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
 Responses:
 There are two types of responses:
-1. while task it still running, where "current" will show progress from 0 to 100
+1. While task is still running, where "current" will show progress from 0 to 100
 ```json
 {
 "result": {
````
````diff
@@ -13,7 +13,7 @@ chatwoot_token=<from part 2>
 5. Start with `flask run` command.
-If you want for bot to stop responding to questions for a specific user or session just add label `human-requested` in your conversation.
+If you want for bot to stop responding to questions for a specific user or session, just add a label `human-requested` in your conversation.
 ### Optional (extra validation)
````
````diff
@@ -26,4 +26,4 @@ account_id=(optional) 1
 assignee_id=(optional) 1
 ```
-Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user.
+Those are chatwoot values and will allow you to check if you are responding to correct widget and responding to questions assigned to specific user.
````
````diff
@@ -4,7 +4,7 @@
 Got to your project and install a new dependency: `npm install docsgpt`.
 ### Usage
-Go to your project and in the file where you want to use the widget import it:
+Go to your project and in the file where you want to use the widget, import it:
 ```js
 import { DocsGPTWidget } from "docsgpt";
 import "docsgpt/dist/style.css";
````
````diff
@@ -14,12 +14,12 @@ import "docsgpt/dist/style.css";
 Then you can use it like this: `<DocsGPTWidget />`
 DocsGPTWidget takes 3 props:
-- `apiHost` — url of your DocsGPT API.
-- `selectDocs` — documentation that you want to use for your widget (eg. `default` or `local/docs1.zip`).
-- `apiKey` — usually its empty.
+- `apiHost` — URL of your DocsGPT API.
+- `selectDocs` — documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
+- `apiKey` — usually it's empty.
 ### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
-Install you widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
+Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
 ```js
 import { DocsGPTWidget } from "docsgpt";
 import "docsgpt/dist/style.css";
````
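For quick reference, the three props listed in the hunk above can be collected into a plain object. Only the prop names and the `default` / empty-key defaults come from the docs being edited; the `apiHost` value here is an assumed placeholder (port 5001 appears elsewhere in this commit's fetch examples):

```javascript
// Sketch of the three documented DocsGPTWidget props.
const widgetProps = {
  apiHost: "http://localhost:5001", // URL of your DocsGPT API (assumed value)
  selectDocs: "default",            // or e.g. "local/docs1.zip"
  apiKey: "",                       // usually empty
};

// In JSX this corresponds to:
// <DocsGPTWidget apiHost={widgetProps.apiHost}
//                selectDocs={widgetProps.selectDocs}
//                apiKey={widgetProps.apiKey} />
console.log(Object.keys(widgetProps).length); // 3
```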
````diff
@@ -1,4 +1,4 @@
-## To customize a main prompt navigate to `/application/prompt/combine_prompt.txt`
+## To customize a main prompt, navigate to `/application/prompt/combine_prompt.txt`
 You can try editing it to see how the model responses.
````
````diff
@@ -5,18 +5,18 @@ This AI can use any documentation, but first it needs to be prepared for similar
 Start by going to `/scripts/` folder.
-If you open this file you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
+If you open this file, you will see that it uses RST files from the folder to create a `index.faiss` and `index.pkl`.
-It currently uses OPEN_AI to create vector store, so make sure your documentation is not too big. Pandas cost me around 3-4$.
+It currently uses OPEN_AI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-$4.
-You can usually find documentation on github in `docs/` folder for most open-source projects.
+You can usually find documentation on Github in `docs/` folder for most open-source projects.
 ### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
 - Name it `inputs/`
 - Put all your .rst/.md files in there
 - The search is recursive, so you don't need to flatten them
-If there are no .rst/.md files just convert whatever you find to txt and feed it. (don't forget to change the extension in script)
+If there are no .rst/.md files just convert whatever you find to .txt and feed it. (don't forget to change the extension in script)
 ### 2. Create .env file in `scripts/` folder
 And write your OpenAI API key inside
````
````diff
@@ -32,7 +32,7 @@ It will tell you how much it will cost
 ### 5. Run web app
-Once you run it will use new context that is relevant to your documentation
+Once you run it will use new context that is relevant to your documentation
 Make sure you select default in the dropdown in the UI
 ## Customization
````
````diff
@@ -41,7 +41,7 @@ You can learn more about options while running ingest.py by running:
 `python ingest.py --help`
 | Options | |
 |:--------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:|
-| **ingest** | Runs 'ingest' function converting documentation to Faiss plus Index format |
+| **ingest** | Runs 'ingest' function, converting documentation to Faiss plus Index format |
 | --dir TEXT | List of paths to directory for index creation. E.g. --dir inputs --dir inputs2 [default: inputs] |
 | --file TEXT | File paths to use (Optional; overrides directory) E.g. --files inputs/1.md --files inputs/2.md |
 | --recursive / --no-recursive | Whether to recursively search in subdirectories [default: recursive] |
````
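Putting the options table above together, a typical invocation might look like the sketch below. The flag names come from the table; that `ingest` is passed as a positional subcommand, and the `inputs` path, are assumptions based on the table and step 1 of the walkthrough:

```
python ingest.py ingest --dir inputs --recursive
```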
````diff
@@ -1,4 +1,4 @@
-Fortunately there are many providers for LLM's and some of them can even be ran locally
+Fortunately, there are many providers for LLM's and some of them can even be run locally
 There are two models used in the app:
 1. Embeddings.
````
````diff
@@ -29,4 +29,4 @@ That's it!
 ### Hosting everything locally and privately (for using our optimised open-source models)
 If you are working with important data and don't want anything to leave your premises.
-Make sure you set `SELF_HOSTED_MODEL` as true in you `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
+Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
````
````diff
@@ -1,4 +1,4 @@
-If your AI uses external knowledge and is not explicit enough it is ok, because we try to make docsgpt friendly.
+If your AI uses external knowledge and is not explicit enough, it is ok, because we try to make docsgpt friendly.
 But if you want to adjust it, here is a simple way.
````