Commit Graph

12 Commits (fc1a2813811acac3e02f619ced9d350470a4e934)

Jared Van Bortel 4fc4d94be4
fix chat-style prompt templates (#1970)
Also use a new version of Mistral OpenOrca.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
7 months ago
cebtenzzre 37b007603a
bindings: replace references to GGMLv3 models with GGUF (#1547) 11 months ago
Adam Treat ea66669cef Switch to new models2.json for new gguf release and bump our version to 2.5.0. 12 months ago
Cosmic Snow 55f96aacc6 Move FAQ entries to general FAQ and adjust, plus minor improvements 1 year ago
Cosmic Snow 19d6460282 Extend & Update Python documentation
- Expand Quickstart
  - Add Examples & Explanations:
    - Info on generation parameters
    - Model folder examples
    - Templates
    - Introspection with logging
    - Notes on allow_download=False
    - Interrupting generation (response callback)
    - FAQ
1 year ago
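The documentation topics listed in the commit above touch on behaviour of the GPT4All Python constructor, in particular loading models from a custom folder with allow_download=False. A minimal sketch of that usage, with an illustrative model file name and path (neither is taken from the commit itself):

from gpt4all import GPT4All

# Load an already-downloaded model from a local folder without contacting
# the download server; the folder and file name below are placeholders.
model = GPT4All(
    model_name="mistral-7b-openorca.Q4_0.gguf",
    model_path="/path/to/local/models",
    allow_download=False,
)

print(model.generate("Name three colors.", max_tokens=32))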
385olt b4dbbd1485
Python bindings: Custom callbacks, chat session improvement, refactoring (#1145)
* Added the following features:
  1) prompt_model now uses the positional callback argument to return the response tokens.
  2) Because prompt_model takes a callback, prompt_model_streaming now only manages the queue and threading, which reduces code duplication.
  3) Added an optional verbose argument to prompt_model that prints the prompt passed to the model.
  4) Chat sessions can now have a header, i.e. an instruction placed before the transcript of the conversation. The header is set when the chat session context is created.
  5) The generate function now accepts an optional callback.
  6) When streaming within a chat session, the user no longer needs to save the assistant's messages manually; this is done automatically.

* added _empty_response_callback so I don't have to check if callback is None

* added docs

* now if the callback stops generation, the last token is ignored

* fixed type hints, reimplemented chat session header as a system prompt, minor refactoring, docs: removed section about manual update of chat session for streaming

* forgot to add some type hints!

* keep the model config in the GPT4All class; it is taken from models.json if download is allowed

* During chat sessions, the model-specific systemPrompt and promptTemplate are applied.

* implemented the changes

* Fixed typing. Now the user can set a prompt template that will be applied even outside of a chat session. The template can also have multiple placeholders that can be filled by passing a dictionary to the generate function

* reversed some changes concerning the prompt templates and their functionality

* fixed some type hints, changed list[float] to List[Float]

* fixed type hints, changed List[Float] to List[float]

* fix typo in the comment: Pepare => Prepare

---------

Signed-off-by: 385olt <385olt@gmail.com>
1 year ago
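The feature list in the commit above describes the response-callback and chat-session behaviour of the Python bindings. A rough sketch of how that API is typically used, assuming a callback signature of (token_id, token_string) -> bool and an illustrative model name (neither is spelled out in the commit message):

from gpt4all import GPT4All

model = GPT4All("mistral-7b-openorca.Q4_0.gguf")  # illustrative model name

# Returning False from the callback stops generation; per the commit notes,
# the token that triggered the stop is then ignored.
def stop_at_newline(token_id: int, token_string: str) -> bool:
    return "\n" not in token_string

# The chat session header is implemented as a system prompt (see commit notes).
with model.chat_session(system_prompt="You are a terse assistant."):
    reply = model.generate("Say hello.", max_tokens=64, callback=stop_at_newline)
    print(reply)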
Adam Treat f543affa9a Add better docs and threading support to bert. 1 year ago
Adam Treat 0c0a4f2c22 Add the docs. 1 year ago
Andriy Mulyar 01bd3d6802
Python chat streaming (#1127)
* Support streaming in chat session

* Uncommented tests
1 year ago
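Streaming inside a chat session, added by the PR above, is exposed through generate(..., streaming=True), which yields tokens as they are produced while the assistant's full reply is recorded in the session history. A minimal sketch, assuming an illustrative model name and the bindings' current_chat_session attribute:

from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.Q4_0.gguf")  # illustrative model name

with model.chat_session():
    # streaming=True returns an iterator of tokens instead of a single string.
    for token in model.generate("Write a haiku about autumn.", streaming=True):
        print(token, end="", flush=True)
    print()
    # The full assistant reply is saved in the session automatically.
    print(model.current_chat_session[-1]["content"])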
Andriy Mulyar 46a0762bd5
Python Bindings: Improved unit tests, documentation and unification of API (#1090)
* Makefiles, black, isort

* Black and isort

* unit tests and generation method

* chat context provider

* context does not reset

* Current state

* Fixup

* Python bindings with unit tests

* GPT4All Python Bindings: chat contexts, tests

* New python bindings and backend fixes

* Black and Isort

* Documentation error

* preserved n_predict for backwards compat with langchain

---------

Co-authored-by: Adam Treat <treat.adam@gmail.com>
1 year ago
Richard Guo 213e033540
GPT4All Updated Docs and FAQ (#632)
* working on docs

* more doc organization

* faq

* some reformatting
1 year ago
Andriy Mulyar 17de7f0529
Chat Client Documentation (#596)
* GPT4All Chat Client Documentation

* Updated documentation wording
1 year ago