Added better documentation to web server example in docs (#603)

* Added modal labs example to documentation

* Added gpt4all chat
Andriy Mulyar 2023-05-16 14:17:35 -04:00 committed by GitHub
parent 3b407a3bd1
commit 96cedc2558

@@ -13,15 +13,14 @@ GPT4All Chat comes with a built-in server mode allowing you to programmatically
with any supported local LLM through a *very familiar* HTTP API. You can find the API documentation [here](https://platform.openai.com/docs/api-reference/completions).
Enabling server mode in the chat client will spin up an HTTP server running on `localhost` port
-`4891` (the reverse of 1984).
+`4891` (the reverse of 1984). You can enable the web server via `GPT4All Chat > Settings > Enable web server`.
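
Once the setting is enabled, a quick request against the local endpoint confirms the server is listening. The snippet below is a minimal check, assuming the server exposes the OpenAI-compatible `/v1/models` route and that the `requests` package is installed:

```python
import requests

# Ping the local GPT4All server (assumes the OpenAI-compatible /v1/models
# endpoint is available; a 200 response means server mode is active).
resp = requests.get("http://localhost:4891/v1/models", timeout=5)
resp.raise_for_status()
print(resp.json())
```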
-You can begin using local LLMs in your AI-powered apps by changing a single line of code: the base path for requests.
+Begin using local LLMs in your AI-powered apps by changing a single line of code: the base path for requests.
```python
import os
import openai
openai.api_base = "http://localhost:4891/v1"
#openai.api_base = "https://api.openai.com/v1"
openai.api_key = "not needed for a local LLM"
@@ -48,3 +47,27 @@ response = openai.Completion.create(
# Print the generated completion
print(response)
```
which gives the following response
```json
{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "text": "Who is Michael Jordan?\nMichael Jordan is a former professional basketball player who played for the Chicago Bulls in the NBA. He was born on December 30, 1963, and retired from playing basketball in 1998."
        }
    ],
    "created": 1684260896,
    "id": "foobarbaz",
    "model": "gpt4all-j-v1.3-groovy",
    "object": "text_completion",
    "usage": {
        "completion_tokens": 35,
        "prompt_tokens": 39,
        "total_tokens": 74
    }
}
```
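
The fields shown above can be read off the response programmatically; a minimal sketch, assuming `response` is the value returned by the `openai.Completion.create` call in the earlier snippet (the openai Python client supports dict-style access):

```python
# Pull the generated text and token accounting out of the completion response.
choice = response["choices"][0]
print(choice["text"])                     # the generated completion
print(choice["finish_reason"])            # e.g. "stop"
print(response["usage"]["total_tokens"])  # prompt + completion tokens
```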