Merge branch 'main' into aboutPage_alignment

pull/653/head
Alex 9 months ago committed by GitHub
commit 78dd1e1d81

@ -7,7 +7,7 @@
</p>
<p align="left">
<strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> is a cutting-edge open-source solution that streamlines the process of finding information in the project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
Say goodbye to time-consuming manual searches, and let <strong><a href="https://docsgpt.arc53.com/">DocsGPT</a></strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
</p>
@ -24,6 +24,7 @@ Say goodbye to time-consuming manual searches, and let <strong><a href="https://
### Production Support / Help for companies:
We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.
- [Book Demo 👋](https://airtable.com/appdeaL0F1qV8Bl2C/shrrJF1Ll7btCJRbP)
- [Send Email ✉️](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)
@ -31,7 +32,6 @@ We're eager to provide personalized assistance when deploying your DocsGPT to a
![video-example-of-docs-gpt](https://d3dg1063dc54p9.cloudfront.net/videos/demov3.gif)

## Roadmap
You can find our roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues; it helps us improve DocsGPT!
@ -39,20 +39,17 @@ You can find our roadmap [here](https://github.com/orgs/arc53/projects/2). Pleas
## Our Open-Source models optimized for DocsGPT:

| Name                                                                   | Base Model  | Requirements (or similar) |
| ---------------------------------------------------------------------- | ----------- | ------------------------- |
| [Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon)    | Falcon-7b   | 1xA10G GPU                |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b)                | llama-2-14b | 2xA10 GPUs                |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon)  | falcon-40b  | 8xA10G GPUs               |

If you don't have enough resources to run it, you can use bitsandbytes to quantize.

## Features

![Group 9](https://user-images.githubusercontent.com/17906039/220427472-2644cff4-7666-46a5-819f-fc4a521f63c7.png)

## Useful links

- 🔍🔥 [Live preview](https://docsgpt.arc53.com/)
@ -67,10 +64,8 @@ If you don't have enough resources to run it, you can use bitsnbytes to quantize
- 🏠🔐 [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)

## Project structure
- Application - Flask app (main application).
- Extensions - Chrome extension.
@ -92,30 +87,30 @@ It will install all the dependencies and allow you to download the local model o
Otherwise, refer to this guide:

1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a `.env` file in your root directory and set the env variable `OPENAI_API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys) and `VITE_API_STREAMING` to true or false, depending on whether you want streaming answers or not.
It should look like this inside:
```
API_KEY=Yourkey
VITE_API_STREAMING=true
```
See optional environment variables in the [/.env-template](https://github.com/arc53/DocsGPT/blob/main/.env-template) and [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) files.
3. Run [./run-with-docker-compose.sh](https://github.com/arc53/DocsGPT/blob/main/run-with-docker-compose.sh).
4. Navigate to http://localhost:5173/.

To stop, just press `Ctrl + C`.
## Development environments

### Spin up mongo and redis
For development, only two containers are used from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) (by deleting all services except for Redis and Mongo).
See file [docker-compose-dev.yaml](./docker-compose-dev.yaml).

Run
```
docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
@ -134,20 +129,25 @@ Make sure you have Python 3.10 or 3.11 installed.
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.

a) On Mac OS and Linux
```commandline
python -m venv venv
. venv/bin/activate
```
b) On Windows
```commandline
python -m venv venv
venv/Scripts/activate
```

3. Change to the `application/` subdirectory with `cd application/` and install the dependencies for the backend:
```commandline
pip install -r requirements.txt
```
4. Run the app using `flask run --host=0.0.0.0 --port=7091`.
5. Start the worker with `celery -A application.app.celery worker -l INFO`.
@ -156,28 +156,32 @@ pip install -r requirements.txt
Make sure you have Node version 16 or higher.

1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder.
2. Install the required packages `husky` and `vite` (ignore if already installed).
```commandline
npm install husky -g
npm install vite -g
```
3. Install dependencies by running `npm install --include=dev`.
4. Run the app using `npm run dev`.
## Contributing
Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information about how to get involved. We welcome issues, questions, and pull requests.

## Code Of Conduct
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more information about contributing.

## Many Thanks To Our Contributors
<a href="https://github.com/arc53/DocsGPT/graphs/contributors" alt="View Contributors">
  <img src="https://contrib.rocks/image?repo=arc53/DocsGPT" alt="Contributors" />
</a>

## License
The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file.

Built with [🦜️🔗 LangChain](https://github.com/hwchase17/langchain)

@ -84,6 +84,19 @@ def api_feedback():
)
return {"status": http.client.responses.get(response.status_code, "ok")}
@user.route("/api/delete_by_ids", methods=["get"])
def delete_by_ids():
"""Delete by ID. These are the IDs in the vectorstore"""
ids = request.args.get("path")
if not ids:
return {"status": "error"}
if settings.VECTOR_STORE == "faiss":
result = vectors_collection.delete_index(ids=ids)
if result:
return {"status": "ok"}
return {"status": "error"}
@user.route("/api/delete_old", methods=["get"]) @user.route("/api/delete_old", methods=["get"])
def delete_old(): def delete_old():

@ -27,6 +27,9 @@ class FaissStore(BaseVectorStore):
def save_local(self, *args, **kwargs):
return self.docsearch.save_local(*args, **kwargs)
def delete_index(self, *args, **kwargs):
return self.docsearch.delete(*args, **kwargs)
def assert_embedding_dimensions(self, embeddings):
"""
Check that the word embedding dimension of the docsearch index matches
@ -41,4 +44,3 @@ class FaissStore(BaseVectorStore):
if word_embedding_dimension != docsearch_index_dimension:
raise ValueError(f"word_embedding_dimension ({word_embedding_dimension}) " +
f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")

@ -50,4 +50,4 @@ yarn dev
- Now, you should be able to view the docs on your local environment by visiting `http://localhost:5000`. You can explore the different markdown files and make changes as you see fit.
- **Footnotes:** This guide assumes you have Node.js and npm installed. The guide involves running a local server using yarn, and viewing the documentation offline. If you encounter any issues, it may be worth verifying your Node.js and npm installations and whether you have installed yarn correctly.

@ -102,3 +102,7 @@ Repeat the process for port `7091`.
#### Access your instance
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)

@ -1,9 +1,25 @@
# API Endpoints Documentation

*Currently, the application provides the following main API endpoints:*
### 1. /api/answer
**Description:**
This endpoint is used to request answers to user-provided questions.
**Request:**
Method: POST
Headers: Content-Type should be set to "application/json; charset=utf-8"
Request Body: JSON object with the following fields:
* **question:** The user's question
* **history:** (Optional) Previous conversation history
* **api_key:** Your API key
* **embeddings_key:** Your embeddings key
* **active_docs:** The location of active documentation
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
@ -18,8 +34,9 @@ fetch("http://127.0.0.1:5000/api/answer", {
.then(console.log.bind(console))
```

**Response:**
In response, you will get a JSON document containing the answer, the query, and the result:
```json
{
"answer": " Hi there! How can I help you?\n",
@ -28,10 +45,17 @@ In response, you will get a JSON document like this one:
}
```
### 2. /api/docs_check

**Description:**
This endpoint will make sure documentation is loaded on the server (just run it every time the user switches between libraries (documentations)).

**Request:**
Method: POST
Headers: Content-Type should be set to "application/json; charset=utf-8"
Request Body: JSON object with the field:
* **docs:** The location of the documentation
```js
// docs_check (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
@ -45,7 +69,9 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
.then(console.log.bind(console))
```

**Response:**
In response, you will get a JSON document like this one, indicating whether the documentation exists or not:
```json
{
"status": "exists"
@ -53,18 +79,36 @@ In response, you will get a JSON document like this one:
```

### 3. /api/combine

**Description:**
This endpoint provides information about available vectors and their locations with a simple GET request.

**Request:**
Method: GET

**Response:**
Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.
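
Here is a minimal fetch sketch for this GET request, assuming the same local server as in the examples above:
```js
// combine (GET http://127.0.0.1:5000/api/combine)
fetch("http://127.0.0.1:5000/api/combine", {
  "method": "GET",
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
  }
})
.then((res) => res.json())
.then(console.log.bind(console))
```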
Example of JSON in Docshub and local:

<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
### 4. /api/upload

**Description:**
This endpoint is used to upload a file that needs to be trained; the response is JSON with a task ID, which can be used to check the task's progress.

**Request:**
Method: POST
Request Body: A multipart/form-data form with a file upload and additional fields, including "user" and "name".

HTML example:
```html
@ -79,20 +123,24 @@ HTML example:
</form>
```

**Response:**
JSON response with a status and a task ID that can be used to check the task's progress. For example:
```json
{
"status": "ok",
"task_id": "b2684988-9047-428b-bd47-08518679103c"
}
```
### 5. /api/task_status

**Description:**
This endpoint is used to get the status of a task (`task_id`) from `/api/upload`.

**Request:**
Method: GET
Query Parameter: task_id (the task ID to check)

**Sample JavaScript Fetch Request:**
```js
// Task status (GET http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
@ -102,7 +150,8 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then(console.log.bind(console))
```

**Response:**
There are two types of responses:
1. While the task is still running, the 'current' value will show progress from 0 to 100.
@ -134,9 +183,14 @@ There are two types of responses:
}
```
### 6. /api/delete_old

**Description:**
This endpoint is used to delete old Vector Stores.

**Request:**
Method: GET

```js
// delete_old (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://localhost:5001/api/delete_old", {
@ -148,8 +202,10 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then((res) => res.text())
.then(console.log.bind(console))
```

**Response:**
JSON response indicating the status of the operation.
```json
{ "status": "ok" }
```

@ -14,9 +14,9 @@ import "docsgpt/dist/style.css";
Then you can use it like this: `<DocsGPTWidget />`

DocsGPTWidget takes 3 props (see the usage sketch below):
1. `apiHost` — URL of your DocsGPT API.
2. `selectDocs` — documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
3. `apiKey` — usually it's empty.
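
For example, here is a minimal usage sketch; the import path and the `apiHost` value are assumptions based on the snippet above, so adjust them for your own deployment:
```jsx
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";

export default function App() {
  return (
    <DocsGPTWidget
      apiHost="http://localhost:7091" // assumed backend URL — point this at your DocsGPT API
      selectDocs="default"
      apiKey=""
    />
  );
}
```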
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:

@ -1,4 +1,27 @@
# Customizing the Main Prompt

To customize the main prompt for DocsGPT, follow these steps:
1. Navigate to `/application/prompt/combine_prompt.txt`.
2. Edit the `combine_prompt.txt` file to modify the prompt text. You can experiment with different phrasings and structures to see how the model responds.
## Example Prompt Modification
**Original Prompt:**
```markdown
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the users question. If its not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
(code)
{summaries}
```
## Conclusion
Customizing the main prompt for DocsGPT allows you to tailor the AI's responses to your unique requirements. Whether you need in-depth explanations, code examples, or specific insights, you can achieve it by modifying the main prompt. Remember to experiment and fine-tune your prompts to get the best results.

@ -12,28 +12,28 @@ It currently uses OPEN_AI to create the vector store, so make sure your document
You can usually find documentation on Github in the `docs/` folder for most open-source projects.

### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
- Name it `inputs/`.
- Put all your .rst/.md files in there.
- The search is recursive, so you don't need to flatten them.

If there are no .rst/.md files, just convert whatever you find to a .txt file and feed it (don't forget to change the extension in the script).

### 2. Create .env file in `scripts/` folder
And write your OpenAI API key inside:
`OPENAI_API_KEY=<your-api-key>`.

### 3. Run scripts/ingest.py
`python ingest.py ingest`
It will tell you how much it will cost.

### 4. Move `index.faiss` and `index.pkl` generated in `scripts/output` to `application/` folder.

### 5. Run web app
Once you run it, it will use the new context that is relevant to your documentation.
Make sure you select `default` in the dropdown in the UI.

## Customization
You can learn more about options while running ingest.py by running:

@ -1,10 +1,10 @@
Fortunately, there are many providers for LLMs, and some of them can even be run locally.

There are two models used in the app:
1. Embeddings.
2. Text generation.

By default, we use OpenAI's models, but if you want to change it or even run it locally, it's very simple!

### Go to .env file or set environment variables:
@ -31,6 +31,6 @@ Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choo
That's it!

### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with critical data and don't want anything to leave your premises, make sure you set `SELF_HOSTED_MODEL` to true in your `.env` file, and for your `LLM_NAME` you can use anything that is on Hugging Face.
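
For example, a minimal `.env` sketch (the model name below is only an illustration; any Hugging Face model ID can be used for `LLM_NAME`):
```
SELF_HOSTED_MODEL=true
LLM_NAME=Arc53/docsgpt-7b-falcon
```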

@ -255,7 +255,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
src={Arrow2}
alt="arrow"
className={`${
!isDocsListOpen ? 'rotate-0' : 'rotate-180'
} ml-auto mr-3 w-3 transition-all`}
/>
</div>
@ -362,7 +362,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
</a>
</div>
</div>
<div className="fixed z-10 h-16 w-full border-b-2 bg-gray-50 md:hidden">
<button
className="mt-5 ml-6 h-6 w-6 md:hidden"
onClick={() => setNavOpen(true)}

@ -0,0 +1,11 @@
.list p {
display: inline;
}
.list li:not(:first-child) {
margin-top: 1em;
}
.list li > .list {
margin-top: 1em;
}

@ -1,6 +1,7 @@
import { forwardRef, useState } from 'react';
import Avatar from '../Avatar';
import { FEEDBACK, MESSAGE_TYPE } from './conversationModels';
import classes from './ConversationBubble.module.css';
import Alert from './../assets/alert.svg';
import { ReactComponent as Like } from './../assets/like.svg';
import { ReactComponent as Dislike } from './../assets/dislike.svg';
@ -27,7 +28,6 @@ const ConversationBubble = forwardRef<
{ message, type, className, feedback, handleFeedback, sources },
ref,
) {
const [showFeedback, setShowFeedback] = useState(false);
const [openSource, setOpenSource] = useState<number | null>(null);
const [copied, setCopied] = useState(false);
@ -40,16 +40,6 @@ const ConversationBubble = forwardRef<
}, 2000);
};
const List = ({
ordered,
children,
}: {
ordered?: boolean;
children: React.ReactNode;
}) => {
const Tag = ordered ? 'ol' : 'ul';
return <Tag className="list-inside list-disc">{children}</Tag>;
};
let bubble;
if (type === 'QUESTION') {
@ -65,12 +55,7 @@ const ConversationBubble = forwardRef<
);
} else {
bubble = (
<div ref={ref} className={`flex self-start ${className} group flex-col`}>
<div className="flex self-start">
<Avatar className="mt-2 text-2xl" avatar="🦖"></Avatar>
<div
@ -104,11 +89,23 @@ const ConversationBubble = forwardRef<
</code>
);
},
ul({ children }) {
return (
<ul
className={`list-inside list-disc whitespace-normal pl-4 ${classes.list}`}
>
{children}
</ul>
);
},
ol({ children }) {
return (
<ol
className={`list-inside list-decimal whitespace-normal pl-4 ${classes.list}`}
>
{children}
</ol>
);
},
}}
>
@ -118,9 +115,7 @@ const ConversationBubble = forwardRef<
<>
<span className="mt-3 h-px w-full bg-[#DEDEDE]"></span>
<div className="mt-3 flex w-full flex-row flex-wrap items-center justify-start gap-2">
<div className="py-1 text-base font-semibold">Sources:</div>
<div className="flex flex-row flex-wrap items-center justify-start gap-2">
{sources?.map((source, index) => (
<div
@ -151,8 +146,8 @@ const ConversationBubble = forwardRef<
)}
</div>
<div
className={`relative mr-2 flex items-center justify-center md:invisible ${
type !== 'ERROR' ? 'group-hover:md:visible' : ''
}`}
>
{copied ? (
@ -167,10 +162,10 @@ const ConversationBubble = forwardRef<
)}
</div>
<div
className={`relative mr-2 flex items-center justify-center md:invisible ${
feedback === 'LIKE' || type !== 'ERROR'
? 'group-hover:md:visible'
: ''
}`}
>
<Like
@ -183,10 +178,10 @@ const ConversationBubble = forwardRef<
></Like>
</div>
<div
className={`relative mr-10 flex items-center justify-center md:invisible ${
feedback === 'DISLIKE' || type !== 'ERROR'
? 'group-hover:md:visible'
: ''
}`}
>
<Dislike

@ -55,7 +55,8 @@ export default function Upload({
setProgress(undefined);
setModalState('INACTIVE');
}}
className={`rounded-3xl bg-purple-30 px-4 py-2 text-sm font-medium text-white ${
isCancellable ? '' : 'hidden'
}`}
>
Finish
@ -205,7 +206,8 @@ export default function Upload({
<div className="flex flex-row-reverse"> <div className="flex flex-row-reverse">
<button <button
onClick={uploadFile} onClick={uploadFile}
className={`ml-6 rounded-3xl bg-purple-30 text-white ${files.length > 0 ? '' : 'bg-opacity-75 text-opacity-80' className={`ml-6 rounded-3xl bg-purple-30 text-white ${
files.length > 0 ? '' : 'bg-opacity-75 text-opacity-80'
} py-2 px-6`} } py-2 px-6`}
disabled={files.length === 0} // Disable the button if no file is selected disabled={files.length === 0} // Disable the button if no file is selected
> >
@ -228,7 +230,8 @@ export default function Upload({
return (
<article
className={`${
modalState === 'ACTIVE' ? 'visible' : 'hidden'
} absolute z-30 h-screen w-screen bg-gray-alpha`}
>
<article className="mx-auto mt-24 flex w-[90vw] max-w-lg flex-col gap-4 rounded-lg bg-white p-6 shadow-lg">
