Commit Graph

174 Commits

385olt
3ed6d176a5
Python bindings: unicode decoding (#1281)
* rewrote the unicode decoding using the structure of multi-byte Unicode symbols.
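The idea behind this change can be sketched with Python's incremental UTF-8 decoder (illustrative code, not the binding's actual implementation): bytes of a multi-byte symbol are buffered until the sequence is complete, so no partial character is ever emitted.

```python
import codecs

# Feed raw model-output bytes one at a time; the incremental decoder
# holds on to incomplete multi-byte sequences and emits text only once
# a full UTF-8 symbol has arrived.
decoder = codecs.getincrementaldecoder("utf-8")()
chunks = []
for byte in "naïve café".encode("utf-8"):
    chunks.append(decoder.decode(bytes([byte])))
print("".join(chunks))  # the original string, reassembled byte by byte
```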
2023-07-30 11:29:51 -07:00
Andriy Mulyar
39acbc8378
Python version bump
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-27 12:19:23 -04:00
Jacob Nguyen
545c23b4bd
typescript: fix final bugs and polishing, circle ci documentation (#960)
* fix: esm and cjs compatibility

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update prebuild.js

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix gpt4all.js

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Fix compile for Windows and Linux again. PLEASE DON'T REVERT THIS!

* version bump

* polish up spec and build scripts

* lock file refresh

* fix: proper resource closing and error handling

* check to make sure libPath is not null

* add msvc build script and update readme requirements

* python workflows in circleci

* dummy python change

* no need for main

* second hold for pypi deploy

* let me deploy pls

* bring back when condition

* Typo, ignore list (#967)

Fix typo in javadoc.
Add word to ignore list for codespellrc.

---------

Co-authored-by: felix <felix@zaslavskiy.net>

* llmodel: change tokenToString to not use string_view (#968)

fixes a definite use-after-free and likely avoids some other
potential ones. A std::string will convert to a std::string_view
automatically, but as soon as the std::string in question goes out of
scope it is freed and the string_view is left pointing at freed
memory. This is *mostly* fine if it's returning a reference to the
tokenizer's internal vocab table, but it's, imo, too easy to return a
reference to a dynamically constructed string this way, as replit is
doing (and unfortunately needs to do, to convert the internal
whitespace-replacement symbol back to a space)

* Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)

* Initial Library Loader

* Load library as part of Model factory

* Dynamically search and find the dlls

* Update tests to use locally built runtimes

* Fix dylib loading, add macos runtime support for sample/tests

* Bypass automatic loading by default.

* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile

* Switch Loading again

* Update build scripts for mac/linux

* Update bindings to support newest breaking changes

* Fix build

* Use llmodel for Windows

* Actually, it does need to be libllmodel

* Name

* Remove TFMs, bypass loading by default

* Fix script

* Delete mac script

---------

Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>

* bump llama.cpp mainline to latest (#964)

* fix prompt context so it's preserved in class

* update setup.py

* metal replit (#931)

metal+replit

makes replit work with Metal and removes its use of `mem_per_token`
in favor of fixed size scratch buffers (closer to llama.cpp)

* update documentation scripts and generation to include readme.md

* update readme and documentation for source

* begin tests, import jest, fix listModels export

* fix typo

* chore: update spec

* fix: finally, reduced potential of empty string

* chore: add stub for createTokenStream

* refactor: protecting resources properly

* add basic jest tests

* update

* update readme

* refactor: namespace the res variable

* circleci integration to automatically build docs

* add starter docs

* typo

* more circle ci typo

* forgot to add nodejs circle ci orb

* fix circle ci

* feat: @iimez verify download and fix prebuild script

* fix: oops, option name wrong

* fix: gpt4all utils not emitting docs

* chore: fix up scripts

* fix: update docs and typings for md5 sum

* fix: macos compilation

* some refactoring

* Update index.cc

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* update readme and enable exceptions on mac

* circle ci progress

* basic embedding with sbert (not tested & cpp side only)

* fix circle ci

* fix circle ci

* update circle ci script

* bruh

* fix again

* fix

* fixed required workflows

* fix ci

* fix pwd

* fix pwd

* update ci

* revert

* fix

* prevent rebuild

* remove noop

* Update continue_config.yml

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update binding.gyp

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix fs not found

* remove cpp 20 standard

* fix warnings, safer way to calculate arrsize

* readd build backend

* basic embeddings and yarn test

* fix circle ci

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

Update continue_config.yml

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

fix macos paths

update readme and roadmap

split up spec

update readme

check for url in modelsjson

update docs and inline stuff

update yarn configuration and readme

update readme

readd npm publish script

add exceptions

bruh one space broke the yaml

codespell

oops forgot to add runtimes folder

bump version

try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images

add fallback for unknown architectures

attached to wrong workspace

hopefully fix

moving everything under backend to persist

should work now

* Update README.md

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

---------

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 11:46:40 -04:00
Andriy Mulyar
41f640577c
Update setup.py (#1263)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-24 14:25:04 -04:00
cosmic-snow
6431d46776
Fix models not getting downloaded in Python bindings (#1262)
- custom callbacks & session improvements PR (v1.0.6) had one too many checks
- remove the problematic config['url'] check
- add a crude test
- fixes #1261
2023-07-24 12:57:06 -04:00
385olt
b4dbbd1485
Python bindings: Custom callbacks, chat session improvement, refactoring (#1145)
* Added the following features:
  1) prompt_model now uses the positional callback argument to return the response tokens.
  2) Thanks to the callback argument of prompt_model, prompt_model_streaming now only manages the queue and threading, which reduces code duplication.
  3) Added an optional verbose argument to prompt_model that prints out the prompt passed to the model.
  4) Chat sessions can now have a header, i.e. an instruction before the transcript of the conversation. The header is set when the chat session context is created.
  5) The generate function now accepts an optional callback.
  6) When streaming within a chat session, the user no longer needs to save the assistant's messages manually; this is done automatically.
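The token-callback mechanism described here can be sketched as follows (hypothetical names and signature, not the binding's actual API): the callback receives each token and returns True to continue or False to stop generation.

```python
# Illustrative response-callback shape: collect tokens and stop
# generation once a limit is reached by returning False.
def make_collector(max_tokens):
    tokens = []
    def on_token(token_id, token_string):
        tokens.append(token_string)
        return len(tokens) < max_tokens  # False tells the model to stop
    return tokens, on_token

tokens, callback = make_collector(3)
# Stand-in for the generation loop that would invoke the callback:
for i, t in enumerate(["Hello", ",", " world", "!"]):
    if not callback(i, t):
        break
print(tokens)  # ['Hello', ',', ' world']
```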

* added _empty_response_callback so I don't have to check if callback is None

* added docs

* now if the callback stops generation, the last token is ignored

* fixed type hints, reimplemented chat session header as a system prompt, minor refactoring, docs: removed section about manual update of chat session for streaming

* forgot to add some type hints!

* keep the config of the model in GPT4All class which is taken from models.json if the download is allowed

* During chat sessions, the model-specific systemPrompt and promptTemplate are applied.

* implemented the changes

* Fixed typing. Now the user can set a prompt template that will be applied even outside of a chat session. The template can also have multiple placeholders that can be filled by passing a dictionary to the generate function

* reversed some changes concerning the prompt templates and their functionality

* fixed some type hints, changed list[float] to List[Float]

* fixed type hints, changed List[Float] to List[float]

* fix typo in the comment: Pepare => Prepare

---------

Signed-off-by: 385olt <385olt@gmail.com>
2023-07-19 18:36:49 -04:00
AMOGUS
5f0aaf8bdb python binding's TopP also needs some love
Changed the Python binding's TopP from 0.1 to 0.4

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
cosmic-snow
2d02c65177
Handle edge cases when generating embeddings (#1215)
* Handle edge cases when generating embeddings
* Improve Python handling & add llmodel_c.h note
- In the Python bindings fail fast with a ValueError when text is empty
- Advise other bindings authors to do likewise in llmodel_c.h
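The fail-fast guard described here amounts to rejecting empty input before it ever reaches the C backend; a minimal sketch (hypothetical function, not the binding's real one):

```python
def embed(text: str) -> list:
    """Hypothetical embedding wrapper: fail fast on empty input rather
    than passing a degenerate case down to the C backend."""
    if not text:
        raise ValueError("text must not be empty")
    return [0.0]  # placeholder for the real embedding call
```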
2023-07-17 13:21:03 -07:00
Andriy Mulyar
cfd70b69fc
Update gpt4all_python_embedding.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-14 14:54:56 -04:00
Andriy Mulyar
306105e62f
Update gpt4all_python_embedding.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-14 14:54:36 -04:00
Andriy Mulyar
89e277bb3c
Update gpt4all_python_embedding.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-14 14:30:14 -04:00
Adam Treat
f543affa9a Add better docs and threading support to bert. 2023-07-14 14:14:22 -04:00
Adam Treat
0c0a4f2c22 Add the docs. 2023-07-14 10:48:18 -04:00
Adam Treat
6656f0f41e Fix the test to work and not do timings. 2023-07-14 09:48:57 -04:00
Adam Treat
bb2b82e1b9 Add docs and bump version since we changed python api again. 2023-07-14 09:48:57 -04:00
Aaron Miller
c77ab849c0 LLModel objects should hold a reference to the library
prevents the llmodel lib from being gc'd while model objects are still alive
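The pattern is simply to store the library handle on the model instance (illustrative class, not the binding's real one), so the handle's refcount stays above zero for as long as any model exists:

```python
class LLModel:
    """Illustrative pattern: pin the loaded llmodel library's lifetime
    to the model object, so Python's GC cannot collect (and unload)
    the library while a live model still calls into it."""
    def __init__(self, library):
        self._library = library  # held for the model's whole lifetime

shared_lib = object()  # stand-in for e.g. ctypes.CDLL("libllmodel.so")
model = LLModel(shared_lib)
```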
2023-07-14 09:48:57 -04:00
Aaron Miller
936dcd2bfc use default n_threads 2023-07-14 09:48:57 -04:00
Aaron Miller
15f1fe5445 rename embedder 2023-07-14 09:48:57 -04:00
Adam Treat
ee4186d579 Fixup bert python bindings. 2023-07-14 09:48:57 -04:00
Adam Treat
4963db8f43 Bump the version numbers for both python and c backend. 2023-07-13 14:21:46 -04:00
Adam Treat
0efdbfcffe Bert 2023-07-13 14:21:46 -04:00
cosmic-snow
00a945eaee Update gpt4all_faq.md
- Add information about AVX/AVX2.
- Update supported architectures.

Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-07-12 15:19:26 -04:00
cosmic-snow
d611d10747
Update index.md (#1157)
Some minor touch-ups to the documentation landing page.

Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-07-08 17:29:35 -04:00
Aaron Miller
ed470e18b3
python: Only eval latest message in chat sessions (#1149)
* python: Only eval latest message in chat sessions

* python: version bump
2023-07-06 21:02:14 -04:00
Andriy Mulyar
71a7032421
python bindings v1.0.2
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-04 11:24:05 -04:00
Aaron Miller
6987910668
python bindings: typing fixes, misc fixes (#1131)
* python: do not mutate locals()

* python: fix (some) typing complaints

* python: queue sentinel need not be a str

* python: make long inference tests opt in
2023-07-03 21:30:24 -04:00
Andriy Mulyar
01bd3d6802
Python chat streaming (#1127)
* Support streaming in chat session

* Uncommented tests
2023-07-03 12:59:39 -04:00
Andriy Mulyar
aced5e6615
Update README.md to python bindings
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-01 18:52:39 -04:00
Andriy Mulyar
19412cfa5d
Clear chat history between chat sessions (#1116) 2023-06-30 20:50:38 -04:00
Aaron Miller
3599663a22 bindings/python: type assert 2023-06-30 21:07:21 -03:00
Aaron Miller
958c8d4fa5 bindings/python: long input tests 2023-06-30 21:07:21 -03:00
Aaron Miller
6a74e515e1 bindings/python: make target to set up env 2023-06-30 21:07:21 -03:00
Aaron Miller
ac5c8e964f
bindings/python: fix typo (#1111) 2023-06-30 17:00:42 -04:00
Andriy Mulyar
46a0762bd5
Python Bindings: Improved unit tests, documentation and unification of API (#1090)
* Makefiles, black, isort

* Black and isort

* unit tests and generation method

* chat context provider

* context does not reset

* Current state

* Fixup

* Python bindings with unit tests

* GPT4All Python Bindings: chat contexts, tests

* New python bindings and backend fixes

* Black and Isort

* Documentation error

* preserved n_predict for backwards compat with langchain

---------

Co-authored-by: Adam Treat <treat.adam@gmail.com>
2023-06-30 16:02:02 -04:00
Andriy Mulyar
6b8456bf99
Update README.md (#1086)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-28 12:15:05 -04:00
AMOGUS
b8464073b8
Update gpt4all_chat.md (#1050)
* Update gpt4all_chat.md

Cleaned up and made the sideloading part more readable, also moved Replit architecture to supported ones. (+ renamed all "ggML" to "GGML" because who calls it "ggML"??)

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Removed the prefixing part

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Bump version

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-27 10:49:45 -04:00
Aaron Miller
b19a3e5b2c add requiredMem method to llmodel impls
Most of these can just shortcut out of the model loading logic. Llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
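The disk-size estimate boils down to a stat call; a sketch under that assumption (hypothetical helper name, shown in Python rather than the backend's C++):

```python
import os
import tempfile

def required_mem_estimate(model_path: str) -> int:
    # For mmap()'d weights, size on disk is a reasonable estimate of
    # the memory the model will need, as the commit message notes.
    return os.path.getsize(model_path)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)  # pretend these are model weights
print(required_mem_estimate(f.name))  # 4096
```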
2023-06-26 18:27:58 -03:00
cosmic-snow
ee26e8f271
CLI Improvements (#1021)
* Add gpt4all-bindings/cli/README.md

* Unify version information
- Was previously split; base one on the other
- Add VERSION_INFO as the "source of truth":
  - Modelled after sys.version_info.
  - Implemented as a tuple, because it's much easier for (partial)
    programmatic comparison.
- Previous API is kept intact.
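The scheme described above can be sketched as follows (values illustrative): a tuple modelled after sys.version_info makes partial programmatic comparison trivial, and the string form is derived from it.

```python
import sys

VERSION_INFO = (0, 3, 5)                    # illustrative, not the real CLI version
VERSION = ".".join(map(str, VERSION_INFO))  # derived from the single source of truth

print(VERSION)                 # 0.3.5
print(VERSION_INFO >= (0, 3))  # True: partial comparison, like sys.version_info
```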

* Add gpt4all-bindings/cli/developer_notes.md
- A few notes on what's what, especially regarding docs

* Add gpt4all-bindings/python/docs/gpt4all_cli.md
- The CLI user documentation

* Bump CLI version to 0.3.5

* Finalise docs & add to index.md
- Amend where necessary
- Fix typo in gpt4all_cli.md
- Mention and add link to CLI doc in index.md

* Add docstings to gpt4all-bindings/cli/app.py

* Better 'groovy' link & fix typo
- Documentation: point to the Hugging Face model card for 'groovy'
- Correct typo in app.py
2023-06-23 12:09:31 -07:00
EKal-aa
aed7b43143
set n_threads in GPT4All python bindings (#1042)
* set n_threads in GPT4All

* changed default n_threads to None
2023-06-23 01:16:35 -07:00
Martin Mauch
af28173a25
Parse Org Mode files (#1038) 2023-06-22 09:09:39 -07:00
Richard Guo
a39a897e34 0.3.5 bump 2023-06-20 10:21:51 -04:00
Richard Guo
25ce8c6a1e revert version 2023-06-20 10:21:51 -04:00
Richard Guo
282a3b5498 setup.py update 2023-06-20 10:21:51 -04:00
cosmic-snow
b00ac632e3
Update python/README.md with troubleshooting info (#1012)
- Add some notes about common Windows problems when trying to make a local build (MinGW and MSVC).

Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-06-18 14:08:43 -04:00
standby24x7
cdea838671
Fix spelling typo in gpt4all.py (#1007)
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2023-06-18 14:07:46 -04:00
Richard Guo
a9b33c3d10 update setup.py 2023-06-13 09:07:08 -04:00
Richard Guo
a99cc34efb fix prompt context so it's preserved in class 2023-06-13 09:07:08 -04:00
Richard Guo
5a0b348219 second hold for pypi deploy 2023-06-12 23:11:54 -04:00
Richard Guo
014205a916 dummy python change 2023-06-12 23:11:54 -04:00
Richard Guo
e9449190cd version bump 2023-06-12 17:32:56 -04:00
Aaron Miller
d3ba1295a7
Metal+LLama take two (#929)
Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Richard Guo
e0a8480c0e
Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method

* cleanup

* bump version number for clarity

* added replace in decode to avoid a UnicodeDecodeError

* revert back to _build_prompt
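The "replace in decode" change above can be illustrated directly (example bytes are mine): a token boundary that cuts a multi-byte sequence no longer raises, it yields U+FFFD instead.

```python
# A UTF-8 sequence truncated mid-character would raise with strict
# decoding; errors="replace" substitutes U+FFFD instead.
raw = "café".encode("utf-8")[:-1]  # drop the last byte of 'é'
text = raw.decode("utf-8", errors="replace")
print(text)  # 'caf' followed by the replacement character
```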
2023-06-09 10:17:44 -04:00
Claudius Ellsel
3c1b59f5c6
Move usage in Python bindings readme to own section (#907)
Give the short usage example its own section, as it is not specific to a local build

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-09 10:13:35 +02:00
Claudius Ellsel
39a7c35d03
Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Richard Guo
c4706d0c14
Replit Model (#713)
* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but lot of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f Supports downloading officially supported models not hosted on gpt4all R2 2023-06-06 16:21:02 -04:00
Andriy Mulyar
266f13aee9
Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Andriy Mulyar
01071efc9c
Documentation for model sideloading (#851)
* Documentation for model sideloading

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 12:35:02 -04:00
Richard Guo
f5f9f28f74 updated pypi version 2023-06-05 12:02:25 -04:00
Richard Guo
9d2b20f6cd small typo fix 2023-06-02 12:32:26 -04:00
Richard Guo
e709e58603 more cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
13fc50f2d3 cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
c54c42e3fb fixed finding model libs 2023-06-02 12:32:26 -04:00
Richard Guo
ab56364da8 load libs correctly 2023-06-02 12:32:26 -04:00
Richard Guo
5490af5a2c model creation is failing... debugging 2023-06-02 12:32:26 -04:00
Richard Guo
9f203c211f load all model libs 2023-06-02 12:32:26 -04:00
Richard Guo
ae42805d49 updated bindings code for updated C api 2023-06-02 12:32:26 -04:00
Andriy Mulyar
cf07ca3951
Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-01 15:35:06 -04:00
Andriy Mulyar
fca2578a81
Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* typo

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-01 10:29:29 -04:00
Andriy Mulyar
05d156fb97
Fixed formatting of localdocs docs (#770) 2023-05-30 16:19:48 -04:00
Andriy Mulyar
6ed9c1a8d8
Improved localdocs documentation (#762)
* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation
2023-05-30 11:26:34 -04:00
Andriy Mulyar
02290fd881
LocalDocs documentation initial (#761)
* LocalDocs documentation initial
2023-05-30 08:35:26 -04:00
Richard Guo
73db20ba85 hotfix default verbose option 2023-05-26 12:49:32 -04:00
Konstantin Gukov
a6f3e94458 one function to append .bin suffix 2023-05-26 09:24:03 -04:00
Konstantin Gukov
659244f0a2 Correct indentation of the multiline error message 2023-05-26 09:24:03 -04:00
Konstantin Gukov
5e61008424 Add optional verbosity 2023-05-26 09:24:03 -04:00
Konstantin Gukov
e05ee9466a Correct return type 2023-05-26 09:24:03 -04:00
Konstantin Gukov
dcbdd369ad Redundant else 2023-05-26 09:24:03 -04:00
Konstantin Gukov
ace34afef2 1. Clean up the interrupted download
2. Use with-syntax
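Both points combine into one small pattern (hypothetical function, not the binding's actual code): write through a with-block so the file is always closed, and delete the partial file if the transfer is interrupted.

```python
import os
import tempfile

def download(dest: str, fetch) -> None:
    """Hypothetical sketch: stream chunks to dest; on any failure,
    remove the partial file so no corrupt download is left behind."""
    try:
        with open(dest, "wb") as f:       # with-syntax: file always closed
            for chunk in fetch():
                f.write(chunk)
    except Exception:
        if os.path.exists(dest):
            os.remove(dest)               # clean up the interrupted download
        raise

def broken_fetch():
    yield b"partial"
    raise IOError("connection lost")

dest = os.path.join(tempfile.mkdtemp(), "model.bin")
try:
    download(dest, broken_fetch)
except IOError:
    pass
print(os.path.exists(dest))  # False: the partial file was removed
```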
2023-05-26 09:24:03 -04:00
Konstantin Gukov
8053dc014b less magic number 2023-05-26 09:24:03 -04:00
Konstantin Gukov
e98cfd97b3 convert to f-strings 2023-05-26 09:24:03 -04:00
Konstantin Gukov
2b6fb7b95e reduce nesting, better error reporting 2023-05-26 09:24:03 -04:00
Konstantin Gukov
a067f38544 Concise model matching 2023-05-26 09:24:03 -04:00
Konstantin Gukov
c1f3dd310c Log where the model was found 2023-05-26 09:24:03 -04:00
Konstantin Gukov
f96300534b Nicer handling of missing model directory.
Correct exception message.
2023-05-26 09:24:03 -04:00
Konstantin Gukov
59d7db9aad More precise condition 2023-05-26 09:24:03 -04:00
Konstantin Gukov
adc599b0a6 rm redundant json 2023-05-26 09:24:03 -04:00
Andriy Mulyar
b40cd065e9
Update index.md (#689)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-05-22 17:29:01 -07:00
Andriy Mulyar
efd39b0d73
Improved documentation landing page (#665)
* Better doc landing page

* Typo

* Improved docs landing page
2023-05-21 23:14:18 -04:00
Juuso Alasuutari
08ece43f0d llmodel: fix wrong and/or missing prompt callback type
Fix occurrences of the prompt callback being incorrectly specified, or
the response callback's prototype being incorrectly used in its place.

Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>
2023-05-21 16:02:11 -04:00
Richard Guo
213e033540
GPT4All Updated Docs and FAQ (#632)
* working on docs

* more doc organization

* faq

* some reformatting
2023-05-18 16:07:57 -04:00
Richard Guo
8bc9a4ca83 fixed response formatting when streaming 2023-05-18 12:02:11 -04:00
Richard Guo
d1b17e1fb3 updating pip version 2023-05-18 12:02:11 -04:00
Richard Guo
057b9f51bc deploying new version with streaming 2023-05-18 12:02:11 -04:00
drbh
d4861030b7
adds a simple cli chat repl (#566)
* adds a simple cli chat repl

* add n thread support and append assistant response
2023-05-16 16:47:54 -04:00
Richard Guo
e659ef5b2a
Improvements to documentation (#606) 2023-05-16 15:29:27 -04:00
Andriy Mulyar
bc481f2ab7
Chat doc typo (#605)
* Added modal labs example to documentation

* Added gpt4all chat

* Typo

* Andriy can't spell
2023-05-16 14:33:34 -04:00
Andriy Mulyar
5528e37660
Chat doc fixes (#604)
* Added modal labs example to documentation

* Added gpt4all chat

* Typo

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-05-16 14:23:37 -04:00
Andriy Mulyar
96cedc2558
Added better documentation to web server example in docs (#603)
* Added modal labs example to documentation

* Added gpt4all chat
2023-05-16 14:17:35 -04:00
Andriy Mulyar
3b407a3bd1
Update gpt4all_chat.md (#601)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-05-16 13:15:00 -04:00