From e74f523071eb0ddf7f7d7a93855dffdd716d1e88 Mon Sep 17 00:00:00 2001
From: Andriy Mulyar
Date: Tue, 16 May 2023 14:23:37 -0400
Subject: [PATCH] Chat doc fixes (#604)

* Added modal labs example to documentation

* Added gpt4all chat

* Typo

---------

Signed-off-by: Andriy Mulyar
---
 gpt4all-bindings/python/docs/gpt4all_chat.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gpt4all-bindings/python/docs/gpt4all_chat.md b/gpt4all-bindings/python/docs/gpt4all_chat.md
index 2db0c239..ff600e67 100644
--- a/gpt4all-bindings/python/docs/gpt4all_chat.md
+++ b/gpt4all-bindings/python/docs/gpt4all_chat.md
@@ -15,7 +15,7 @@ with any supported local LLM through a *very familiar* HTTP API. You can find th
 Enabling server mode in the chat client will spin-up on an HTTP server running on `localhost` port `4891` (the reverse of 1984). You can enable the webserver via `GPT4All Chat > Settings > Enable web server`.

-Begin using local LLMs in your AI powered apps by changing a single line of code: the bath path for requests.
+Begin using local LLMs in your AI powered apps by changing a single line of code: the base path for requests.

 ```python
 import openai
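
For context, the line this patch corrects tells readers to repoint their OpenAI-style client at the GPT4All chat server on `localhost:4891`. A minimal sketch of what "changing the base path" means in practice; the `completions_url` helper is hypothetical (added here only to show how the endpoint derives from the base path), no request is actually made:

```python
# The GPT4All chat client's web server listens on localhost port 4891
# (per the doc being patched); an OpenAI-compatible client only needs
# its base path swapped from api.openai.com to this local address.
base_path = "http://localhost:4891/v1"

def completions_url(base: str) -> str:
    """Hypothetical helper: build the completions endpoint from a base path."""
    return base.rstrip("/") + "/completions"

print(completions_url(base_path))
```

With the legacy `openai` 0.x Python client, the equivalent one-line change would be assigning this string to the client's base-path setting before issuing requests.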