# GPT4All Chat Client
The [GPT4All Chat Client](https://gpt4all.io) lets you easily interact with any local large language model.
It is optimized to run 7-13B parameter LLMs on the CPU of any computer running macOS, Windows, or Linux.
## GPT4All Chat Server Mode
GPT4All Chat comes with a built-in server mode allowing you to programmatically interact
with any supported local LLM through a *very familiar* HTTP API. You can find the API documentation [here](https://platform.openai.com/docs/api-reference/completions).
Enabling server mode in the chat client will spin up an HTTP server on `localhost` port
`4891` (the reverse of 1984).
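Because the local server mimics the OpenAI API layout, endpoint URLs are just the familiar OpenAI routes rooted at the new host. A minimal sketch, assuming the server exposes the standard `/v1/completions` route:

```python
# Base URL for the local GPT4All server (port 4891, the reverse of 1984)
local_api_base = "http://localhost:4891/v1"

# The completions route follows the OpenAI API layout (assumption: the
# server mirrors OpenAI's path structure, per the API docs linked above)
completions_url = f"{local_api_base}/completions"
print(completions_url)  # http://localhost:4891/v1/completions
```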
You can begin using local LLMs in your AI-powered apps by changing a single line of code: the base path for requests.
```python
import os

import openai

# Point the OpenAI client at the local GPT4All server instead of the OpenAI API
openai.api_base = "http://localhost:4891/v1"
# openai.api_base = "https://api.openai.com/v1"

# Read the OpenAI API key from an environment variable
openai_api_key = os.environ.get("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("Please set the 'OPENAI_API_KEY' environment variable.")
openai.api_key = openai_api_key

# Set up the prompt and other parameters for the API request
prompt = "Who is Michael Jordan?"

# model = "gpt-3.5-turbo"
# model = "mpt-7b-chat"
model = "gpt4all-j-v1.3-groovy"

# Make the API request
response = openai.Completion.create(
    model=model,
    prompt=prompt,
    max_tokens=50,
    temperature=0.28,
    top_p=0.95,
    n=1,
    echo=True,
    stream=False,
)

# Print the generated completion
print(response)
```
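The response follows the OpenAI completions schema, so the generated text lives under `choices[0]["text"]`; note that with `echo=True` the prompt is prepended to the completion. A sketch of pulling out just the completion, using a response-shaped dict with illustrative values rather than output from a live server:

```python
# A response-shaped dict in the OpenAI completions schema
# (illustrative values only, not captured from a real GPT4All server)
sample_response = {
    "model": "gpt4all-j-v1.3-groovy",
    "choices": [
        {
            "index": 0,
            "text": "Who is Michael Jordan? Michael Jordan is a former professional basketball player.",
            "finish_reason": "stop",
        }
    ],
}

# With echo=True the prompt is echoed back, so slice it off to keep
# only the newly generated text
prompt = "Who is Michael Jordan?"
text = sample_response["choices"][0]["text"]
completion_only = text[len(prompt):].strip()
print(completion_only)
```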