Commit Graph

196 Commits

Author SHA1 Message Date
mvenditto
d3831f7dbe first attempt to store test results 2023-07-11 18:09:39 -04:00
mvenditto
5fe4f25d64 fix curr working directory 2023-07-11 18:09:39 -04:00
mvenditto
7c67134b8c try to sort out ci only error on build related to CA2101 2023-07-11 18:09:39 -04:00
mvenditto
2cbe791e5c add a SkipOnCI trait for tests 2023-07-11 18:09:39 -04:00
felix
6630bf2f13 update to gpt4all 2.4.11
Falcon model support.
Developer docs included for Java.
2023-07-11 12:43:44 -04:00
cosmic-snow
d611d10747
Update index.md (#1157)
Some minor touch-ups to the documentation landing page.

Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-07-08 17:29:35 -04:00
Aaron Miller
ed470e18b3
python: Only eval latest message in chat sessions (#1149)
* python: Only eval latest message in chat sessions

* python: version bump
2023-07-06 21:02:14 -04:00
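A minimal sketch of how this change surfaces to callers, assuming the `gpt4all` Python package around the 1.0.x API (the model filename below is illustrative, not prescribed by the commit): inside `chat_session()`, earlier turns stay in the session context, so each `generate()` call only needs to evaluate the newest message.

```python
from gpt4all import GPT4All

model = GPT4All("ggml-model.bin")  # illustrative model filename

with model.chat_session():
    # First turn: the whole prompt (system template + message) is evaluated.
    print(model.generate("Name three prime numbers."))
    # Second turn: with this change, only this latest message should need
    # evaluation, since earlier turns already live in the session context.
    print(model.generate("Now add them together."))
# History is scoped to the `with` block; a new chat_session() starts clean.
```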
cosmic-snow
affd0af51f
Fix CLI to work with 1.x.y version of the Python bindings (#1120)
* Fix CLI to work with 1.x.y version of the Python bindings (tentative)
- Adapted to bindings API changes
- Version selection based on package information
- Does not currently work with 1.x.y however, as it's not fully implemented:
  "NotImplementedError: Streaming tokens in a chat session is not currently supported."

* Adapt to the completed streaming API with session support

* Bump CLI version to 1.0.2
2023-07-05 22:42:15 -04:00
felix
8dcf68dbf4 Add note about running in Docker containers 2023-07-05 16:33:11 -04:00
felix
77f435a77e Put signing plugin under separate profile. 2023-07-05 16:33:11 -04:00
felix
4e274baee1 bump version; a few more doc fixes.
add macOS Metal files
Add check for prompt that is too long.
add logging statement for the gpt4all version of the binding
add version string, README update
Add unit tests for the Java code of the Java bindings.
2023-07-05 16:33:11 -04:00
Andriy Mulyar
71a7032421
python bindings v1.0.2
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-04 11:24:05 -04:00
Aaron Miller
6987910668
python bindings: typing fixes, misc fixes (#1131)
* python: do not mutate locals()

* python: fix (some) typing complaints

* python: queue sentinel need not be a str

* python: make long inference tests opt in
2023-07-03 21:30:24 -04:00
Andriy Mulyar
01bd3d6802
Python chat streaming (#1127)
* Support streaming in chat session

* Uncommented tests
2023-07-03 12:59:39 -04:00
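A hedged sketch of streaming inside a chat session, assuming the post-1.0 `gpt4all` Python API (model filename illustrative): passing `streaming=True` makes `generate()` return an iterator of tokens rather than a finished string.

```python
from gpt4all import GPT4All

model = GPT4All("ggml-model.bin")  # illustrative model filename

with model.chat_session():
    # streaming=True yields tokens as they are produced.
    for token in model.generate("Write a haiku about rivers.", streaming=True):
        print(token, end="", flush=True)
    print()
```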
Andriy Mulyar
aced5e6615
Update README.md to python bindings
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-01 18:52:39 -04:00
Andriy Mulyar
19412cfa5d
Clear chat history between chat sessions (#1116) 2023-06-30 20:50:38 -04:00
Aaron Miller
3599663a22 bindings/python: type assert 2023-06-30 21:07:21 -03:00
Aaron Miller
958c8d4fa5 bindings/python: long input tests 2023-06-30 21:07:21 -03:00
Aaron Miller
6a74e515e1 bindings/python: make target to set up env 2023-06-30 21:07:21 -03:00
Aaron Miller
ac5c8e964f
bindings/python: fix typo (#1111) 2023-06-30 17:00:42 -04:00
Andriy Mulyar
46a0762bd5
Python Bindings: Improved unit tests, documentation and unification of API (#1090)
* Makefiles, black, isort

* Black and isort

* unit tests and generation method

* chat context provider

* context does not reset

* Current state

* Fixup

* Python bindings with unit tests

* GPT4All Python Bindings: chat contexts, tests

* New python bindings and backend fixes

* Black and Isort

* Documentation error

* preserved n_predict for backwards compat with langchain

---------

Co-authored-by: Adam Treat <treat.adam@gmail.com>
2023-06-30 16:02:02 -04:00
Andriy Mulyar
6b8456bf99
Update README.md (#1086)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-28 12:15:05 -04:00
AMOGUS
b8464073b8
Update gpt4all_chat.md (#1050)
* Update gpt4all_chat.md

Cleaned up and made the sideloading part more readable, also moved Replit architecture to supported ones. (+ renamed all "ggML" to "GGML" because who calls it "ggML"??)

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Removed the prefixing part

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Bump version

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-27 10:49:45 -04:00
Aaron Miller
b19a3e5b2c add requiredMem method to llmodel impls
most of these can just shortcut out of the model loading logic; llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams, and then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway)
2023-06-26 18:27:58 -03:00
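The reasoning above amounts to using the on-disk size as the memory estimate for mmap()-ed model files. A rough Python sketch of that idea (the helper name is hypothetical; the real `requiredMem` lives in the C++ llmodel implementations):

```python
import os

def estimated_required_mem(model_path: str) -> int:
    """Hypothetical sketch: for mmap()-ed model files, the on-disk size is a
    reasonable estimate of the memory the loaded model will need."""
    return os.path.getsize(model_path)
```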
cosmic-snow
ee26e8f271
CLI Improvements (#1021)
* Add gpt4all-bindings/cli/README.md

* Unify version information
- Was previously split; base one on the other
- Add VERSION_INFO as the "source of truth":
  - Modelled after sys.version_info.
  - Implemented as a tuple, because it's much easier for (partial)
    programmatic comparison.
- Previous API is kept intact.

* Add gpt4all-bindings/cli/developer_notes.md
- A few notes on what's what, especially regarding docs

* Add gpt4all-bindings/python/docs/gpt4all_cli.md
- The CLI user documentation

* Bump CLI version to 0.3.5

* Finalise docs & add to index.md
- Amend where necessary
- Fix typo in gpt4all_cli.md
- Mention and add link to CLI doc in index.md

* Add docstrings to gpt4all-bindings/cli/app.py

* Better 'groovy' link & fix typo
- Documentation: point to the Hugging Face model card for 'groovy'
- Correct typo in app.py
2023-06-23 12:09:31 -07:00
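A small sketch of the tuple-based version scheme described above, modelled after `sys.version_info`; the field values and names here are illustrative, not the CLI's actual definitions.

```python
# Illustrative only; the CLI's real VERSION_INFO may carry different fields.
VERSION_INFO = (0, 3, 5)
VERSION = ".".join(str(part) for part in VERSION_INFO)

# Tuples compare element-wise, which is what makes partial, programmatic
# comparison easy:
if VERSION_INFO >= (0, 3):
    print(f"CLI {VERSION} is at least 0.3")
```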
EKal-aa
aed7b43143
set n_threads in GPT4All python bindings (#1042)
* set n_threads in GPT4All

* changed default n_threads to None
2023-06-23 01:16:35 -07:00
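A hedged usage sketch, assuming the `gpt4all` Python package after this change (model filename illustrative): `n_threads` is passed to the constructor, and leaving it as the default `None` lets the backend pick the thread count.

```python
from gpt4all import GPT4All

# n_threads=8 forces eight inference threads; omitting it (None) defers to
# the backend default. Model filename is illustrative.
model = GPT4All("ggml-model.bin", n_threads=8)
print(model.generate("Hello there."))
```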
Michael Mior
ae3d91476c
Improve grammar in Java bindings README (#1045)
Signed-off-by: Michael Mior <michael.mior@gmail.com>
2023-06-22 18:49:58 -04:00
Martin Mauch
af28173a25
Parse Org Mode files (#1038) 2023-06-22 09:09:39 -07:00
Richard Guo
a39a897e34 0.3.5 bump 2023-06-20 10:21:51 -04:00
Richard Guo
25ce8c6a1e revert version 2023-06-20 10:21:51 -04:00
Richard Guo
282a3b5498 setup.py update 2023-06-20 10:21:51 -04:00
cosmic-snow
b00ac632e3
Update python/README.md with troubleshooting info (#1012)
- Add some notes about common Windows problems when trying to make a local build (MinGW and MSVC).

Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-06-18 14:08:43 -04:00
standby24x7
cdea838671
Fix spelling typo in gpt4all.py (#1007)
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2023-06-18 14:07:46 -04:00
cosmic-snow
b66d0b4fff
Fix CLI app.py (#910)
- the bindings API changed in 057b9, but the CLI was not updated
- change 'std_passthrough' param to the renamed 'streaming'
- remove '_cli_override_response_callback' as it breaks and is no longer needed
- bump version to 0.3.4
2023-06-16 16:06:22 -04:00
Ettore Di Giacinto
b004c53a7b
Allow to set a SetLibrarySearchPath in the golang bindings (#981)
This is used to identify the path where all the various implementations are
2023-06-14 16:27:19 +02:00
Richard Guo
a9b33c3d10 update setup.py 2023-06-13 09:07:08 -04:00
Richard Guo
a99cc34efb fix prompt context so it's preserved in class 2023-06-13 09:07:08 -04:00
Tim Miller
797891c995
Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)
* Initial Library Loader

* Load library as part of Model factory

* Dynamically search and find the dlls

* Update tests to use locally built runtimes

* Fix dylib loading, add macos runtime support for sample/tests

* Bypass automatic loading by default.

* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile

* Switch Loading again

* Update build scripts for mac/linux

* Update bindings to support newest breaking changes

* Fix build

* Use llmodel for Windows

* Actually, it does need to be libllmodel

* Name

* Remove TFMs, bypass loading by default

* Fix script

* Delete mac script

---------

Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-06-13 14:05:34 +02:00
Felix Zaslavskiy
726dcbd43d
Typo, ignore list (#967)
Fix typo in javadoc,
Add word to ignore list for codespellrc

---------

Co-authored-by: felix <felix@zaslavskiy.net>
2023-06-13 00:53:27 -07:00
Richard Guo
5a0b348219 second hold for pypi deploy 2023-06-12 23:11:54 -04:00
Richard Guo
014205a916 dummy python change 2023-06-12 23:11:54 -04:00
Richard Guo
e9449190cd version bump 2023-06-12 17:32:56 -04:00
Jacob Nguyen
8d53614444
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed

* Small cleanups for settings dialog.

* Fix the build.

* localdocs

* Fixup the rescan. Fix debug output.

* Add remove folder implementation.

* Remove this signal as unnecessary for now.

* Cleanup of the database, better chunking, better matching.

* Add new reverse prompt for new localdocs context feature.

* Add a new muted text color.

* Turn off the debugging messages by default.

* Add prompt processing and localdocs to the busy indicator in UI.

* Specify a large number of suffixes we will search for now.

* Add a collection list to support a UI.

* Add a localdocs tab.

* Start fleshing out the localdocs ui.

* Begin implementing the localdocs ui in earnest.

* Clean up the settings dialog for localdocs a bit.

* Add more of the UI for selecting collections for chats.

* Complete the settings for localdocs.

* Adds the collections to serialize and implement references for localdocs.

* Store the references separately so they are not sent to datalake.

* Add context link to references.

* Don't use the full path in reference text.

* Various fixes to remove unnecessary warnings.

* Add a newline

* ignore rider and vscode dirs

* create test project and basic model loading tests

* make sample print usage and cleaner

* Get the backend as well as the client building/working with msvc.

* Libraries named differently on msvc.

* Bump the version number.

* This time remember to bump the version right after a release.

* rm redundant json

* More precise condition

* Nicer handling of missing model directory.
Correct exception message.

* Log where the model was found

* Concise model matching

* reduce nesting, better error reporting

* convert to f-strings

* less magic number

* 1. Cleanup the interrupted download
2. with-syntax

* Redundant else

* Do not ignore explicitly passed 4 threads

* Correct return type

* Add optional verbosity

* Correct indentation of the multiline error message

* one function to append .bin suffix

* hotfix default verbose option

* export hidden types and fix prompt() type

* tiny typo (#739)

* Update README.md (#738)

* Update README.md

fix golang gpt4all import path

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

* Update README.md

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

---------

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

* fix(training instructions): model repo name (#728)

Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>

* C# Bindings - Prompt formatting (#712)

* Added support for custom prompt formatting

* more docs added

* bump version

* clean up cc files and revert things

* LocalDocs documentation initial (#761)

* LocalDocs documentation initial

* Improved localdocs documentation (#762)

* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation

* New tokenizer implementation for MPT and GPT-J

Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.

Featuring:
 * Fixed unicode handling (via ICU)
 * Fixed BPE token merge handling
 * Complete added vocabulary handling

* buf_ref.into() can be const now

* add tokenizer readme w/ instructions for convert script

* Revert "add tokenizer readme w/ instructions for convert script"

This reverts commit 9c15d1f83e.

* Revert "buf_ref.into() can be const now"

This reverts commit 840e011b75.

* Revert "New tokenizer implementation for MPT and GPT-J"

This reverts commit ee3469ba6c.

* Fix remove model from model download for regular models.

* Fixed formatting of localdocs docs (#770)

* construct and return the correct response when the request is a chat completion

* chore: update typings to keep consistent with python api

* progress, updating createCompletion to mirror py api

* update spec, unfinished backend

* prebuild binaries for package distribution using prebuildify/node-gyp-build

* Get rid of blocking behavior for regenerate response.

* Add a label to the model loading visual indicator.

* Use the new MyButton for the regenerate response button.

* Add a hover and pressed to the visual indication of MyButton.

* Fix wording of this accessible description.

* Some color and theme enhancements to make the UI contrast a bit better.

* Make the comboboxes align in UI.

* chore: update namespace and fix prompt bug

* fix linux build

* add roadmap

* Fix offset of prompt/response icons for smaller text.

* Dlopen backend 5 (#779)

Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.

* Add a custom busy indicator to further align look and feel across platforms.

* Draw the indicator for combobox to ensure it looks the same on all platforms.

* Fix warning.

* Use the proper text color for sending messages.

* Fixup the plus new chat button.

* Make all the toolbuttons highlight on hover.

* Advanced avxonly autodetection (#744)

* Advanced avxonly requirement detection

* chore: support llamaversion >= 3 and ggml default

* Dlopen better implementation management (Version 2)

* Add fixme's and clean up a bit.

* Documentation improvements on LocalDocs (#790)

* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* typo

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Adapt code

* Makefile changes (WIP to test)

* Debug

* Adapt makefile

* Style

* Implemented logging mechanism (#785)

* Cleaned up implementation management (#787)

* Cleaned up implementation management

* Initialize LLModel::m_implementation to nullptr

* llmodel.h: Moved dlhandle fwd declare above LLModel class

* Fix compile

* Fixed double-free in LLModel::Implementation destructor

* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)

* Drop leftover include

* Add ldl in gpt4all.go for dynamic linking (#797)

* Logger should also output to stderr

* Fix MSVC Build, Update C# Binding Scripts

* Update gpt4all_chat.md (#800)

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* C# Bindings - improved logging (#714)

* added optional support for .NET logging

* bump version and add missing alpha suffix

* avoid creating additional namespace for extensions

* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors

---------

Signed-off-by: mvenditto <venditto.matteo@gmail.com>

* Make localdocs work with server mode.

* Better name for database results.

* Fix for stale references after we regenerate.

* Don't hardcode these.

* Fix bug with resetting context with chatgpt model.

* Trying to shrink the copy+paste code and do more code sharing between backend model impl.

* Remove this as it is no longer useful.

* Try and fix build on mac.

* Fix mac build again.

* Add models/release.json to github repo to allow PRs

* Fixed spelling error in models.json

to make CI happy

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* updated bindings code for updated C api

* load all model libs

* model creation is failing... debugging

* load libs correctly

* fixed finding model libs

* cleanup

* cleanup

* more cleanup

* small typo fix

* updated binding.gyp

* Fixed model type for GPT-J (#815)

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* Fixed tons of warnings and clazy findings (#811)

* Some tweaks to UI to make window resizing smooth and flow nicely.

* Min constraints on about dialog.

* Prevent flashing of white on resize.

* Actually use the theme dark color for window background.

* Add the ability to change the directory via text field not just 'browse' button.

* add scripts to build dlls

* markdown doc gen

* add scripts, nearly done moving breaking changes

* merge with main

* oops, fixed comment

* more meaningful name

* leave for testing

* Only default mlock on macOS where swap seems to be a problem

Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by 9c6c09cbd2

Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>

* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.

* some tweaks to optional types and defaults

* mingw script for windows compilation

* Update README.md

huggingface -> Hugging Face

Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>

* Backend prompt dedup (#822)

* Deduplicated prompt() function code

* Better error handling when the model fails to load.

* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)

* Update build_and_run.md (#834)

Signed-off-by: AT <manyoso@users.noreply.github.com>

* Trying out a new feature to download directly from huggingface.

* Try again with the url.

* Allow for download of models hosted on third party hosts.

* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.

* Update to latest llama.cpp

* Remove older models that are not as popular. (#837)

* Remove older models that are not as popular.

* Update models.json

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update models.json (#838)

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update models.json

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* feat: finally compiled on Windows (MSVC)

* update README and spec and promisify createCompletion

* update d.ts

* Make installers work with mac/windows for big backend change.

* Need this so the linux installer packages it as a dependency.

* Try and fix mac.

* Fix compile on mac.

* These need to be installed for them to be packaged and work for both mac and windows.

* Fix installers for windows and linux.

* Fix symbol resolution on windows.

* updated pypi version

* Release notes for version 2.4.5 (#853)

* Update README.md (#854)

Signed-off-by: AT <manyoso@users.noreply.github.com>

* Documentation for model sideloading (#851)

* Documentation for model sideloading

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Speculative fix for windows llama models with installer.

* Revert "Speculative fix for windows llama models with installer."

This reverts commit add725d1eb.

* Revert "Fix bug with resetting context with chatgpt model." (#859)

This reverts commit e0dcf6a14f.

* Fix llama models on linux and windows.

* Bump the version.

* New release notes

* Set thread counts after loading model (#836)

* Update gpt4all_faq.md (#861)

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Supports downloading officially supported models not hosted on gpt4all R2

* Replit Model (#713)

* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but lot of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment

* Synced llama.cpp.cmake with upstream (#887)

* Fix for windows.

* fix: build script

* Revert "Synced llama.cpp.cmake with upstream (#887)"

This reverts commit 5c5e10c1f5.

* Update README.md (#906)

Add PyPI link and add clickable, more specific link to documentation

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>

* Update CollectionsDialog.qml (#856)

Phrasing for localdocs

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* sampling: remove incorrect offset for n_vocab (#900)

no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens,
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this).

* non-llama: explicitly greedy sampling for temp<=0 (#901)

copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output

* work on thread safety and cleaning up, adding object option

* chore: cleanup tests and spec

* refactor for object based startup

* more docs

* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.

* more docs

* Synced llama.cpp.cmake with upstream

* add lock file to ignore codespell

* Move usage in Python bindings readme to own section (#907)

Have own section for short usage example, as it is not specific to local build

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>

* Always sync for circleci.

* update models json with replit model

* Forgot to bump.

* Change the default values for generation in GUI

* Removed double-static from variables in replit.cpp

The anonymous namespace already makes it static.

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* Generator in Python Bindings - streaming yields tokens at a time (#895)

* generator method

* cleanup

* bump version number for clarity

* added replace in decode to avoid UnicodeDecodeError

* revert back to _build_prompt

* Do auto detection by default in C++ API

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* remove comment

* add comments for index.h

* chore: add new models and edit ignore files and documentation

* llama on Metal (#885)

Support latest llama with Metal

---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>

* Revert "llama on Metal (#885)"

This reverts commit b59ce1c6e7.

* add more readme stuff and debug info

* spell

* Metal+LLama take two (#929)

Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>

* add prebuilts for windows

* Add new solution for context links that does not force regular markdown (#938)

in responses, which is disruptive to code completions.

* add prettier

* split out non llm related methods into util.js, add listModels method

* add prebuild script for creating all platforms bindings at once

* check in prebuild linux/so libs and allow distribution of napi prebuilds

* apply autoformatter

* move constants in config.js, add loadModel and retrieveModel methods

* Clean up the context links a bit.

* Don't interfere with selection.

* Add code blocks and python syntax highlighting.

* Spelling error.

* Add c++/c highlighting support.

* Fix some bugs with bash syntax and add some C23 keywords.

* Bugfixes for prompt syntax highlighting.

* Try and fix a false positive from codespell.

* When recalculating context we can't erase the BOS.

* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5ed
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`

* remove .so unneeded path

---------

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 15:00:20 -04:00
Felix Zaslavskiy
44bf91855d
Initial 1.0.0 Java-Bindings PR/release (#805)
* Initial 1.0.0 Java-Bindings PR/release

* Initial 1.1.0 Java-Bindings PR/release

* Add debug ability

* 1.1.2  release

---------

Co-authored-by: felix <felix@zaslavskiy.net>
2023-06-12 14:58:06 -04:00
Juuso Alasuutari
5cfb1bda89
llmodel: add model wrapper destructor, fix mem leak in golang bindings (#862)
Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>
2023-06-12 09:41:22 -07:00
Aaron Miller
d3ba1295a7
Metal+LLama take two (#929)
Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Richard Guo
e0a8480c0e
Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method

* cleanup

* bump version number for clarity

* added replace in decode to avoid UnicodeDecodeError

* revert back to _build_prompt
2023-06-09 10:17:44 -04:00
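For the bare (non-chat-session) case, a hedged sketch of consuming the generator, written against the later unified `generate(..., streaming=True)` entry point; the PR itself describes a "generator method", so the exact method name at this version may differ (model filename illustrative).

```python
from gpt4all import GPT4All

model = GPT4All("ggml-model.bin")  # illustrative model filename

# Tokens arrive one at a time from a Python generator and can be collected
# or forwarded as they are produced.
tokens = list(model.generate("Why is the sky blue?", streaming=True))
print("".join(tokens))
```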
Claudius Ellsel
3c1b59f5c6
Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-09 10:13:35 +02:00
Claudius Ellsel
39a7c35d03
Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Richard Guo
c4706d0c14
Replit Model (#713)
* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but lot of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f Supports downloading officially supported models not hosted on gpt4all R2 2023-06-06 16:21:02 -04:00
Andriy Mulyar
266f13aee9
Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Ettore Di Giacinto
44dc1ade62
Set thread counts after loading model (#836) 2023-06-05 21:35:40 +02:00
Andriy Mulyar
01071efc9c
Documentation for model sideloading (#851)
* Documentation for model sideloading

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 12:35:02 -04:00
Richard Guo
f5f9f28f74 updated pypi version 2023-06-05 12:02:25 -04:00
Richard Guo
9d2b20f6cd small typo fix 2023-06-02 12:32:26 -04:00
Richard Guo
e709e58603 more cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
13fc50f2d3 cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
c54c42e3fb fixed finding model libs 2023-06-02 12:32:26 -04:00
Richard Guo
ab56364da8 load libs correctly 2023-06-02 12:32:26 -04:00
Richard Guo
5490af5a2c model creation is failing... debugging 2023-06-02 12:32:26 -04:00
Richard Guo
9f203c211f load all model libs 2023-06-02 12:32:26 -04:00
Richard Guo
ae42805d49 updated bindings code for updated C api 2023-06-02 12:32:26 -04:00
mvenditto
8e89ceb54b
C# Bindings - improved logging (#714)
* added optional support for .NET logging

* bump version and add missing alpha suffix

* avoid creating additional namespace for extensions

* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors

---------

Signed-off-by: mvenditto <venditto.matteo@gmail.com>
2023-06-01 21:01:27 +01:00
Andriy Mulyar
cf07ca3951
Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-01 15:35:06 -04:00
Tim Miller
87cb3505d3 Fix MSVC Build, Update C# Binding Scripts 2023-06-01 14:24:23 -04:00
Ettore Di Giacinto
022f1cabe7
Add ldl in gpt4all.go for dynamic linking (#797) 2023-06-01 19:50:08 +02:00
mudler
682a383e06 Drop leftover include 2023-06-01 13:03:44 -04:00
mudler
243c762411 Style 2023-06-01 10:36:22 -04:00
mudler
5220356273 Adapt makefile 2023-06-01 10:36:22 -04:00
mudler
19dd6c7635 Debug 2023-06-01 10:36:22 -04:00
mudler
7c7864ac72 Makefile changes (WIP to test) 2023-06-01 10:36:22 -04:00
mudler
79cef86bec Adapt code 2023-06-01 10:36:22 -04:00
Andriy Mulyar
fca2578a81
Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* typo

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-01 10:29:29 -04:00
Andriy Mulyar
05d156fb97
Fixed formatting of localdocs docs (#770) 2023-05-30 16:19:48 -04:00
Andriy Mulyar
6ed9c1a8d8
Improved localdocs documentation (#762)
* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation
2023-05-30 11:26:34 -04:00
Andriy Mulyar
02290fd881
LocalDocs documentation initial (#761)
* LocalDocs documentation initial
2023-05-30 08:35:26 -04:00
mvenditto
9eb81cb549
C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting

* more docs added

* bump version
2023-05-28 19:57:00 -04:00
Nandakumar
d101ca06d4
Update README.md (#738)
* Update README.md

fix golang gpt4all import path

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

* Update README.md

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

---------

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
2023-05-28 19:51:11 -04:00
Joseph Mearman
020f64b9a4
tiny typo (#739) 2023-05-28 19:50:45 -04:00
Richard Guo
73db20ba85 hotfix default verbose option 2023-05-26 12:49:32 -04:00
Konstantin Gukov
a6f3e94458 one function to append .bin suffix 2023-05-26 09:24:03 -04:00
Konstantin Gukov
659244f0a2 Correct indentation of the multiline error message 2023-05-26 09:24:03 -04:00
Konstantin Gukov
5e61008424 Add optional verbosity 2023-05-26 09:24:03 -04:00
Konstantin Gukov
e05ee9466a Correct return type 2023-05-26 09:24:03 -04:00
Konstantin Gukov
100c809f1e Do not ignore explicitly passed 4 threads 2023-05-26 09:24:03 -04:00
Konstantin Gukov
dcbdd369ad Redundant else 2023-05-26 09:24:03 -04:00
Konstantin Gukov
ace34afef2 1. Cleanup the interrupted download
2. with-syntax
2023-05-26 09:24:03 -04:00
Konstantin Gukov
8053dc014b less magic number 2023-05-26 09:24:03 -04:00
Konstantin Gukov
e98cfd97b3 convert to f-strings 2023-05-26 09:24:03 -04:00
Konstantin Gukov
2b6fb7b95e reduce nesting, better error reporting 2023-05-26 09:24:03 -04:00
Konstantin Gukov
a067f38544 Concise model matching 2023-05-26 09:24:03 -04:00
Konstantin Gukov
c1f3dd310c Log where the model was found 2023-05-26 09:24:03 -04:00
Konstantin Gukov
f96300534b Nicer handling of missing model directory.
Correct exception message.
2023-05-26 09:24:03 -04:00
Konstantin Gukov
59d7db9aad More precise condition 2023-05-26 09:24:03 -04:00
Konstantin Gukov
adc599b0a6 rm redundant json 2023-05-26 09:24:03 -04:00
redthing1
63f57635d8 make sample print usage and cleaner 2023-05-25 11:34:21 -04:00
redthing1
dec8546abe create test project and basic model loading tests 2023-05-25 11:34:07 -04:00
redthing1
0cc86d19be ignore rider and vscode dirs 2023-05-25 11:34:07 -04:00
Andriy Mulyar
b40cd065e9
Update index.md (#689)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-05-22 17:29:01 -07:00