# GPT4All Node.js API
```sh
yarn add gpt4all@latest
npm install gpt4all@latest
pnpm install gpt4all@latest
```
The original [GPT4All typescript bindings](https://github.com/nomic-ai/gpt4all-ts) are now out of date.
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
* New bindings created by [jacoobes](https://github.com/jacoobes), [limez](https://github.com/iimez), and the [Nomic AI community](https://home.nomic.ai), for all to use.
* The Node.js API has made strides toward mirroring the Python API. It is not a 100% match yet, but many parts of the API resemble their Python counterparts.
* Everything should work out of the box.
* See the [API Reference](#api-reference).
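For a quick start, the snippet below lists the remotely available models and loads one. This is a minimal sketch assuming the package's `listModels` and `loadModel` exports; the model filename used here is only a placeholder, so substitute a filename from the printed list.

```javascript
import { listModels, loadModel } from 'gpt4all'

// Fetch the remote models.json catalog and print the available filenames.
const models = await listModels()
console.log(models.map((m) => m.filename))

// Download (if not already cached) and load a model.
// 'ggml-model-gpt4all-falcon-q4_0.bin' is a placeholder name; pick one
// of the filenames printed above.
const model = await loadModel('ggml-model-gpt4all-falcon-q4_0.bin', {
    verbose: true,
})
```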
### Chat Completion
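A minimal chat-completion sketch, assuming the package exports `loadModel` and `createCompletion` and returns an OpenAI-style response with a `choices` array, as the Python bindings do; the model filename is a placeholder.

```javascript
import { loadModel, createCompletion } from 'gpt4all'

// Placeholder model filename; use any model returned by listModels().
const model = await loadModel('ggml-model-gpt4all-falcon-q4_0.bin')

// createCompletion mirrors the Python chat-completion API: pass an array
// of role/content messages and read the generated reply back.
const completion = await createCompletion(model, [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is 1 + 1?' },
])
console.log(completion.choices[0].message.content)
```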
```js
import { createCompletion, loadModel } from '../src/gpt4all.js'
const model = await loadModel('ggml-vicuna-7b-1.1-q4_2', { verbose: true });
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one funcion to append .bin suffix
* hotfix default verbose optioin
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct reponse when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squashed merged from dlopen_backend_5 where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This prevents the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC)
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens,
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this).
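The intended slice can be sketched as follows (function and variable names here are illustrative, not the actual gpt4all API): when the logits buffer is padded beyond the trained vocabulary, sampling should read only the first `actualVocabSize` entries.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: keep the leading actualVocabSize logits. The start point is
// unchanged; only the end of the slice moves, so "placeholder" tokens
// padding the tail of the buffer are never sampled.
std::vector<float> usableLogits(const std::vector<float>& logits,
                                std::size_t actualVocabSize) {
    // Entries [0, actualVocabSize) are the trained tokens.
    return std::vector<float>(logits.begin(),
                              logits.begin() + actualVocabSize);
}
```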
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
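A minimal sketch of that short-circuit, with illustrative names rather than the actual sampler code: dividing logits by a temperature of zero would send them to infinity, so `temp <= 0` is treated as an explicit request for greedy (argmax) sampling.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Illustrative sketch: temp <= 0 means greedy sampling; otherwise the
// logits are scaled by 1/temp before the usual sampling steps.
int sampleToken(std::vector<float> logits, float temp) {
    if (temp <= 0.0f) {
        // Greedy: pick the single most likely token instead of dividing
        // by zero and producing infinities.
        return static_cast<int>(std::distance(
            logits.begin(), std::max_element(logits.begin(), logits.end())));
    }
    for (float& l : logits) l /= temp;  // normal temperature scaling
    // (the real sampler would apply softmax + top-k/top-p here; argmax
    // stands in to keep the sketch short)
    return static_cast<int>(std::distance(
        logits.begin(), std::max_element(logits.begin(), logits.end())));
}
```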
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid UnicodeDecodeError exception
* revert back to _build_prompt
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
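The underlying CMake distinction can be sketched like so (the target name is illustrative, not the project's actual target): `/arch:AVX2` is a compiler option, not a preprocessor macro, so it belongs in `target_compile_options`.

```cmake
# Passing /arch:AVX2 through target_compile_definitions makes MSVC treat it
# as a (broken) macro definition, hence warning C5102. It is a compiler
# option and must go through target_compile_options instead.
# "gptj" is an illustrative target name.
target_compile_options(gptj PRIVATE $<$<CXX_COMPILER_ID:MSVC>:/arch:AVX2>)
```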
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00
feat(typescript)/dynamic template (#1287) (#1326)
* feat(typescript)/dynamic template (#1287)
* remove packaged yarn
* prompt templates update wip
* prompt template update
* system prompt template, update types, remove embed promises, cleanup
* support both snakecased and camelcased prompt context
* fix #1277 libbert, libfalcon and libreplit libs not being moved into the right folder after build
* added support for modelConfigFile param, allowing the user to specify a local file instead of downloading the remote models.json. added a warning message if code fails to load a model config. included prompt context docs by amogus.
* snakecase warning, put logic for loading local models.json into listModels, added constant for the default remote model list url, test improvements, simpler hasOwnProperty call
* add DEFAULT_PROMPT_CONTEXT, export new constants
* add md5sum testcase and fix constants export
* update types
* throw if attempting to list models without a source
* rebuild docs
* fix download logging undefined url, toFixed typo, pass config filesize in for future progress report
* added overload with union types
* bump to 2.2.0, remove alpha
* code speling
---------
Co-authored-by: Andreas Obersteiner <8959303+iimez@users.noreply.github.com>
2023-08-14 16:45:45 +00:00
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
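The BPE merge step being matched can be sketched as a toy loop (the merge table here is made up; the real tokenizer uses the model's learned merge ranks):

```cpp
#include <cassert>
#include <climits>
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy BPE merge: repeatedly merge the adjacent pair with the lowest rank
// until no learned merge applies. The ranks map stands in for the merge
// table shipped with the model.
std::vector<std::string> bpeMerge(
    std::vector<std::string> pieces,
    const std::map<std::pair<std::string, std::string>, int>& ranks) {
    while (true) {
        int bestRank = INT_MAX;
        std::size_t bestI = 0;
        for (std::size_t i = 0; i + 1 < pieces.size(); ++i) {
            auto it = ranks.find({pieces[i], pieces[i + 1]});
            if (it != ranks.end() && it->second < bestRank) {
                bestRank = it->second;
                bestI = i;
            }
        }
        if (bestRank == INT_MAX) break;      // no learned merge applies
        pieces[bestI] += pieces[bestI + 1];  // merge the lowest-rank pair
        pieces.erase(pieces.begin() + bestI + 1);
    }
    return pieces;
}
```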
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash merged from dlopen_backend_5 where the history is preserved.
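The pluggable-backend idea can be sketched with plain `dlopen` (the `construct` symbol and the error handling here are illustrative, not the real `Dlhandle` wrapper; Windows would use `LoadLibrary` instead):

```cpp
#include <dlfcn.h>  // POSIX dynamic loading

#include <cassert>
#include <cstdio>

// Illustrative sketch: each implementation library exports a known
// constructor symbol, and the host picks a library at runtime.
typedef void* (*construct_fn)();

void* loadImplementation(const char* libPath) {
    void* handle = dlopen(libPath, RTLD_LAZY | RTLD_LOCAL);
    if (handle == nullptr) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    construct_fn construct =
        reinterpret_cast<construct_fn>(dlsym(handle, "construct"));
    if (construct == nullptr) {
        dlclose(handle);  // not one of ours: unload and report failure
        return nullptr;
    }
    return construct();
}
```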
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
```js
const response = await createCompletion(model, [
    { role: 'system', content: 'You are meant to be annoying and unhelpful.' },
    { role: 'user', content: 'What is 1 + 1?' }
]);
```
### Embedding
* Update build scripts for mac/linux
* Update bindings to support newest breaking changes
* Fix build
* Use llmodel for Windows
* Actually, it does need to be libllmodel
* Name
* Remove TFMs, bypass loading by default
* Fix script
* Delete mac script
---------
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
* bump llama.cpp mainline to latest (#964)
* fix prompt context so it's preserved in class
* update setup.py
* metal replit (#931)
metal+replit
makes replit work with Metal and removes its use of `mem_per_token`
in favor of fixed size scratch buffers (closer to llama.cpp)
* update documentation scripts and generation to include readme.md
* update readme and documentation for source
* begin tests, import jest, fix listModels export
* fix typo
* chore: update spec
* fix: finally, reduced potential of empty string
* chore: add stub for createTokenSream
* refactor: protecting resources properly
* add basic jest tests
* update
* update readme
* refactor: namespace the res variable
* circleci integration to automatically build docs
* add starter docs
* typo
* more circle ci typo
* forgot to add nodejs circle ci orb
* fix circle ci
* feat: @iimez verify download and fix prebuild script
* fix: oops, option name wrong
* fix: gpt4all utils not emitting docs
* chore: fix up scripts
* fix: update docs and typings for md5 sum
* fix: macos compilation
* some refactoring
* Update index.cc
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* update readme and enable exceptions on mac
* circle ci progress
* basic embedding with sbert (not tested & cpp side only)
* fix circle ci
* fix circle ci
* update circle ci script
* bruh
* fix again
* fix
* fixed required workflows
* fix ci
* fix pwd
* fix pwd
* update ci
* revert
* fix
* prevent rebuild
* revmove noop
* Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update binding.gyp
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix fs not found
* remove cpp 20 standard
* fix warnings, safer way to calculate arrsize
* readd build backend
* basic embeddings and yarn test"
* fix circle ci
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
fix macos paths
update readme and roadmap
split up spec
update readme
check for url in modelsjson
update docs and inline stuff
update yarn configuration and readme
update readme
readd npm publish script
add exceptions
bruh one space broke the yaml
codespell
oops forgot to add runtimes folder
bump version
try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images
add fallback for unknown architectures
attached to wrong workspace
hopefuly fix
moving everything under backend to persist
should work now
* update circle ci script
* prevent rebuild
* revmove noop
* Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update binding.gyp
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix fs not found
* remove cpp 20 standard
* fix warnings, safer way to calculate arrsize
* readd build backend
* basic embeddings and yarn test"
* fix circle ci
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
fix macos paths
update readme and roadmap
split up spec
update readme
check for url in modelsjson
update docs and inline stuff
update yarn configuration and readme
update readme
readd npm publish script
add exceptions
bruh one space broke the yaml
codespell
oops forgot to add runtimes folder
bump version
try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images
add fallback for unknown architectures
attached to wrong workspace
hopefuly fix
moving everything under backend to persist
should work now
* Update README.md
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
---------
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
```js
import { createEmbedding, loadModel } from '../src/gpt4all.js'
const model = await loadModel('ggml-all-MiniLM-L6-v2-f16', { verbose: true });
const fltArray = createEmbedding(model, "Pain is inevitable, suffering optional");
```
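Once you have embeddings, a common next step is comparing them. Below is a minimal sketch of cosine similarity over two embedding vectors; it assumes `createEmbedding` returns a flat numeric array (e.g. a `Float32Array`), which you would pass in place of the plain arrays used here for illustration.

```javascript
// Cosine similarity between two embedding vectors of equal length.
// Works on any array-like of numbers (plain arrays, Float32Array, ...).
function cosineSimilarity(a, b) {
    let dot = 0;
    let normA = 0;
    let normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];   // accumulate dot product
        normA += a[i] * a[i]; // accumulate squared magnitudes
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0, 0], [1, 0, 0])); // 1
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // 0
```

In practice you would compute `createEmbedding(model, textA)` and `createEmbedding(model, textB)` and compare the resulting arrays the same way.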
### Build Instructions
* `binding.gyp` holds the compile configuration.
* Tested on Ubuntu; everything works as expected.
* Tested on Windows; everything works as expected.
* Sparse testing on macOS.
* MinGW also builds the gpt4all-backend. **HOWEVER**, this package works only with MSVC-built dlls.
### Requirements
* revmove noop
* Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update binding.gyp
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix fs not found
* remove cpp 20 standard
* fix warnings, safer way to calculate arrsize
* readd build backend
* basic embeddings and yarn test"
* fix circle ci
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
fix macos paths
update readme and roadmap
split up spec
update readme
check for url in modelsjson
update docs and inline stuff
update yarn configuration and readme
update readme
readd npm publish script
add exceptions
bruh one space broke the yaml
codespell
oops forgot to add runtimes folder
bump version
try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images
add fallback for unknown architectures
attached to wrong workspace
hopefuly fix
moving everything under backend to persist
should work now
* Update README.md
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
---------
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
* git
* [node.js >= 18.0.0](https://nodejs.org/en)
* [yarn](https://yarnpkg.com/)
* [node-gyp](https://github.com/nodejs/node-gyp)
    * all of its requirements
* (unix) gcc version 12
* (win) msvc version 143
    * can be obtained with the Visual Studio 2022 build tools
* python 3
* On Windows and Linux, building GPT4All requires the complete Vulkan SDK, which you can download from https://vulkan.lunarg.com/sdk/home
* macOS users do not need Vulkan, as GPT4All will use Metal instead.
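A quick way to sanity-check the toolchain before building is a small pre-flight script. This is only a sketch, not part of the official setup; it checks just the node version requirement listed above:

```sh
# Pre-flight sketch: verify node meets the >= 18.0.0 requirement.
# `node --version` prints e.g. "v18.17.1"; strip the "v" and compare the major.
ver="$(node --version 2>/dev/null || echo v0.0.0)"
major="${ver#v}"; major="${major%%.*}"
if [ "$major" -ge 18 ]; then
    echo "node OK ($ver)"
else
    echo "need node >= 18.0.0 (found: $ver)" >&2
fi
```

The same pattern extends to `git --version`, `yarn --version`, and `python3 --version` if you want a fuller check.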
### Build (from source)
```sh
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all-bindings/typescript
```
* The shell commands below assume the current working directory is `typescript`.
* To build and rebuild:
```sh
yarn
```
* The llama.cpp git submodule for gpt4all may be absent. If so, run the following in the llama.cpp parent directory:
```sh
git submodule update --init --depth 1 --recursive
```
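To see whether the submodule step is actually needed, you can test whether the submodule directory is already populated. This is a sketch, and the path below is an assumption about the checkout layout; adjust it to where llama.cpp lives in your tree:

```sh
# Sketch: detect an uninitialized (empty) submodule directory before building.
# NOTE: the path is a guess relative to gpt4all-bindings/typescript; adjust it.
sub="../../gpt4all-backend/llama.cpp"
if [ -d "$sub" ] && [ -n "$(ls -A "$sub" 2>/dev/null)" ]; then
    echo "submodule present"
else
    echo "submodule missing - run: git submodule update --init --depth 1 --recursive"
fi
```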
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This prevents the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC)
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens,
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this).
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid unicodedecode exception
* revert back to _build_prompt
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses which is disruptive to code completions in responses.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00
```sh
yarn build:backend
```
typescript: fix final bugs and polishing, circle ci documentation (#960)
* fix: esm and cjs compatibility
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update prebuild.js
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix gpt4all.js
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Fix compile for windows and linux again. PLEASE DON'T REVERT THIS!
* version bump
* polish up spec and build scripts
* lock file refresh
* fix: proper resource closing and error handling
* check make sure libPath not null
* add msvc build script and update readme requirements
* python workflows in circleci
* dummy python change
* no need for main
* second hold for pypi deploy
* let me deploy pls
* bring back when condition
* Typo, ignore list (#967)
Fix typo in javadoc,
Add word to ignore list for codespellrc
---------
Co-authored-by: felix <felix@zaslavskiy.net>
* llmodel: change tokenToString to not use string_view (#968)
fixes a definite use-after-free and likely avoids some other
potential ones - a std::string will convert to a std::string_view
automatically, but as soon as the std::string in question goes out of
scope it is freed and the string_view is pointing at freed
memory - this is *mostly* fine if it's returning a reference to the
tokenizer's internal vocab table, but it's, imo, too easy to return a
reference to a dynamically constructed string with this, as replit is
doing (and unfortunately needs to do to convert the internal whitespace
replacement symbol back to a space)
* Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)
* Initial Library Loader
* Load library as part of Model factory
* Dynamically search and find the dlls
* Update tests to use locally built runtimes
* Fix dylib loading, add macos runtime support for sample/tests
* Bypass automatic loading by default.
* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile
* Switch Loading again
* Update build scripts for mac/linux
* Update bindings to support newest breaking changes
* Fix build
* Use llmodel for Windows
* Actually, it does need to be libllmodel
* Name
* Remove TFMs, bypass loading by default
* Fix script
* Delete mac script
---------
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
* bump llama.cpp mainline to latest (#964)
* fix prompt context so it's preserved in class
* update setup.py
* metal replit (#931)
metal+replit
makes replit work with Metal and removes its use of `mem_per_token`
in favor of fixed size scratch buffers (closer to llama.cpp)
* update documentation scripts and generation to include readme.md
* update readme and documentation for source
* begin tests, import jest, fix listModels export
* fix typo
* chore: update spec
* fix: finally, reduced potential of empty string
* chore: add stub for createTokenStream
* refactor: protecting resources properly
* add basic jest tests
* update
* update readme
* refactor: namespace the res variable
* circleci integration to automatically build docs
* add starter docs
* typo
* more circle ci typo
* forgot to add nodejs circle ci orb
* fix circle ci
* feat: @iimez verify download and fix prebuild script
* fix: oops, option name wrong
* fix: gpt4all utils not emitting docs
* chore: fix up scripts
* fix: update docs and typings for md5 sum
* fix: macos compilation
* some refactoring
* Update index.cc
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* update readme and enable exceptions on mac
* circle ci progress
* basic embedding with sbert (not tested & cpp side only)
* fix circle ci
* fix circle ci
* update circle ci script
* bruh
* fix again
* fix
* fixed required workflows
* fix ci
* fix pwd
* fix pwd
* update ci
* revert
* fix
* prevent rebuild
* remove noop
* Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update binding.gyp
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix fs not found
* remove cpp 20 standard
* fix warnings, safer way to calculate arrsize
* readd build backend
* basic embeddings and yarn test
* fix circle ci
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
fix macos paths
update readme and roadmap
split up spec
update readme
check for url in modelsjson
update docs and inline stuff
update yarn configuration and readme
update readme
readd npm publish script
add exceptions
bruh one space broke the yaml
codespell
oops forgot to add runtimes folder
bump version
try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images
add fallback for unknown architectures
attached to wrong workspace
hopefully fix
moving everything under backend to persist
should work now
* Update README.md
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
---------
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5 where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This prevents the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC)
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens,
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this).
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid unicodedecode exception
* revert back to _build_prompt
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses which is disruptive to code completions in responses.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00
This will build platform-dependent dynamic libraries, which will be located in `runtimes/(platform)/native`. The only current way to use them is to place them in the current working directory of your application. That is, **WHEREVER YOU RUN YOUR NODE APPLICATION**.
* Update build scripts for mac/linux
* Update bindings to support newest breaking changes
* Fix build
* Use llmodel for Windows
* Actually, it does need to be libllmodel
* Name
* Remove TFMs, bypass loading by default
* Fix script
* Delete mac script
---------
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
* bump llama.cpp mainline to latest (#964)
* fix prompt context so it's preserved in class
* update setup.py
* metal replit (#931)
metal+replit
makes replit work with Metal and removes its use of `mem_per_token`
in favor of fixed size scratch buffers (closer to llama.cpp)
* update documentation scripts and generation to include readme.md
* update readme and documentation for source
* begin tests, import jest, fix listModels export
* fix typo
* chore: update spec
* fix: finally, reduced potential of empty string
* chore: add stub for createTokenStream
* refactor: protecting resources properly
* add basic jest tests
* update
* update readme
* refactor: namespace the res variable
* circleci integration to automatically build docs
* add starter docs
* typo
* more circle ci typo
* forgot to add nodejs circle ci orb
* fix circle ci
* feat: @iimez verify download and fix prebuild script
* fix: oops, option name wrong
* fix: gpt4all utils not emitting docs
* chore: fix up scripts
* fix: update docs and typings for md5 sum
* fix: macos compilation
* some refactoring
* Update index.cc
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* update readme and enable exceptions on mac
* circle ci progress
* basic embedding with sbert (not tested & cpp side only)
* fix circle ci
* fix circle ci
* update circle ci script
* bruh
* fix again
* fix
* fixed required workflows
* fix ci
* fix pwd
* fix pwd
* update ci
* revert
* fix
* prevent rebuild
* remove noop
* Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update binding.gyp
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix fs not found
* remove cpp 20 standard
* fix warnings, safer way to calculate arrsize
* readd build backend
* basic embeddings and yarn test
* fix circle ci
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
fix macos paths
update readme and roadmap
split up spec
update readme
check for url in modelsjson
update docs and inline stuff
update yarn configuration and readme
update readme
readd npm publish script
add exceptions
bruh one space broke the yaml
codespell
oops forgot to add runtimes folder
bump version
try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images
add fallback for unknown architectures
attached to wrong workspace
hopefully fix
moving everything under backend to persist
should work now
* Update README.md
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
---------
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
* `llama-xxxx.dll` is required.
* Depending on which model you are using, you'll need to select the proper model loader.
* For example, if you are running a Mosaic MPT model, you will need to select `mpt-(buildvariant).(dynamiclibrary)`.
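As a hypothetical illustration of that naming scheme (the `nativeLibName` helper and the exact filenames are assumptions, not the bindings' API):

```javascript
// Hypothetical sketch: assemble the native library filename from the model
// loader, build variant, and platform extension, following the
// mpt-(buildvariant).(dynamiclibrary) pattern; real filenames may differ.
function nativeLibName(loader, buildVariant, platform) {
  const ext = { win32: "dll", darwin: "dylib", linux: "so" }[platform];
  return `${loader}-${buildVariant}.${ext}`;
}

console.log(nativeLibName("mpt", "default", "win32")); // mpt-default.dll
```

The same pattern would yield, for example, an `avxonly` llama build on Linux as `llama-avxonly.so`.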
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5 where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC) goadman
* update README and spec and promisfy createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
2023-05-22 19:55:22 +00:00
### Test
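The history above mentions Jest-based tests ("add basic jest tests", "basic embeddings and yarn test"); assuming a standard checkout with dependencies installed, the suite can be run with:

```shell
yarn test
```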
```sh
yarn test
```
### Source Overview
#### src/
* Extra functions to aid developer experience
* Typings for the native Node addon
* The JavaScript interface
#### test/
* Simple unit tests for some of the exported functions.
* More advanced AI testing is not handled.
#### spec/
* Average look and feel of the api
* Should work assuming a model and libraries are installed locally in working directory
2023-05-22 19:55:22 +00:00
#### index.cc
* The bridge between Node.js and C++; this is where the native bindings are defined.
#### prompt.cc
* Handles prompting and model inference in a thread-safe, asynchronous way.
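Because the underlying native model is not safe to invoke from multiple calls at once, inference requests are serialized while callers still get an ordinary Promise-based interface. A minimal sketch of that pattern (the `InferenceQueue` class and `fakePrompt` function are illustrative, not the actual binding code):

```javascript
// Hypothetical sketch of the pattern prompt.cc implements: the native
// model is not reentrant, so calls are queued and run one at a time,
// while each caller still receives an ordinary Promise.
class InferenceQueue {
  constructor() {
    // Tail of the chain; each new call waits for the previous one.
    this.tail = Promise.resolve();
  }
  // Schedule `task` to run after all previously scheduled tasks.
  run(task) {
    const result = this.tail.then(() => task());
    // Keep the chain alive even if a task rejects.
    this.tail = result.catch(() => {});
    return result;
  }
}

// Usage: two simulated "prompt" calls that must not overlap.
const queue = new InferenceQueue();
const order = [];
async function fakePrompt(name, ms) {
  order.push(`${name}:start`);
  await new Promise((resolve) => setTimeout(resolve, ms));
  order.push(`${name}:end`);
  return name;
}

Promise.all([
  queue.run(() => fakePrompt("a", 20)),
  queue.run(() => fakePrompt("b", 5)),
]).then((results) => {
  // "a" fully finishes before "b" starts, despite being slower.
  console.log(order.join(","), results);
});
```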
### Known Issues
* Why your model may be spewing bull 💩
    * The downloaded model is broken (just reinstall it, or download it again from the official site)
* That's it so far
### Roadmap
This package is in active development, and breaking changes may occur until the API stabilizes. Here's the current todo list:
* \[x] prompt models via a threadsafe function for proper non-blocking behavior in Node.js
* \[ ] ~~createTokenStream, an async iterator that streams each token emitted from the model. Planning on following this [example](https://github.com/nodejs/node-addon-examples/tree/main/threadsafe-async-iterator)~~ May not be implemented unless someone else can complete it
* \[x] proper unit testing (integrate with circle ci)
* \[x] publish to npm under alpha tag `gpt4all@alpha`
* \[x] have more people test on other platforms (mac tester needed)
* \[x] switch to new pluggable backend
* \[ ] NPM bundle size reduction via an optionalDependencies strategy (help needed)
* Should include prebuilds to avoid painful node-gyp errors
* \[ ] createChatSession (the Python equivalent of create\_chat\_session)
### API Reference
<!-- Generated by documentation.js. Update this documentation by updating the source code. -->
##### Table of Contents
* [ModelType](#modeltype)
* [ModelFile](#modelfile)
    * [gptj](#gptj)
    * [llama](#llama)
    * [mpt](#mpt)
    * [replit](#replit)
* [type](#type)
* [LLModel](#llmodel)
    * [constructor](#constructor)
        * [Parameters](#parameters)
    * [type](#type-1)
    * [name](#name)
    * [stateSize](#statesize)
    * [threadCount](#threadcount)
    * [setThreadCount](#setthreadcount)
        * [Parameters](#parameters-1)
    * [raw\_prompt](#raw_prompt)
        * [Parameters](#parameters-2)
    * [embed](#embed)
        * [Parameters](#parameters-3)
    * [isModelLoaded](#ismodelloaded)
    * [setLibraryPath](#setlibrarypath)
        * [Parameters](#parameters-4)
    * [getLibraryPath](#getlibrarypath)
* [loadModel](#loadmodel)
    * [Parameters](#parameters-5)
* [createCompletion](#createcompletion)
    * [Parameters](#parameters-6)
* [createEmbedding](#createembedding)
    * [Parameters](#parameters-7)
* [CompletionOptions](#completionoptions)
    * [verbose](#verbose)
    * [systemPromptTemplate](#systemprompttemplate)
    * [promptTemplate](#prompttemplate)
    * [promptHeader](#promptheader)
    * [promptFooter](#promptfooter)
* [PromptMessage](#promptmessage)
    * [role](#role)
    * [content](#content)
* [prompt\_tokens](#prompt_tokens)
* [completion\_tokens](#completion_tokens)
* [total\_tokens](#total_tokens)
* [CompletionReturn](#completionreturn)
    * [model](#model)
    * [usage](#usage)
    * [choices](#choices)
* [CompletionChoice](#completionchoice)
    * [message](#message)
* [LLModelPromptContext](#llmodelpromptcontext)
    * [logitsSize](#logitssize)
    * [tokensSize](#tokenssize)
    * [nPast](#npast)
    * [nCtx](#nctx)
    * [nPredict](#npredict)
    * [topK](#topk)
    * [topP](#topp)
    * [temp](#temp)
    * [nBatch](#nbatch)
    * [repeatPenalty](#repeatpenalty)
    * [repeatLastN](#repeatlastn)
    * [contextErase](#contexterase)
* [createTokenStream](#createtokenstream)
    * [Parameters](#parameters-8)
* [DEFAULT\_DIRECTORY](#default_directory)
* [DEFAULT\_LIBRARIES\_DIRECTORY](#default_libraries_directory)
* [DEFAULT\_MODEL\_CONFIG](#default_model_config)
* [DEFAULT\_PROMT\_CONTEXT](#default_promt_context)
* [DEFAULT\_MODEL\_LIST\_URL](#default_model_list_url)
* [downloadModel](#downloadmodel)
    * [Parameters](#parameters-9)
    * [Examples](#examples)
* [DownloadModelOptions](#downloadmodeloptions)
    * [modelPath](#modelpath)
    * [verbose](#verbose-1)
    * [url](#url)
    * [md5sum](#md5sum)
* [DownloadController](#downloadcontroller)
    * [cancel](#cancel)
    * [promise](#promise)
#### ModelType
Type of the model
Type: (`"gptj"` | `"llama"` | `"mpt"` | `"replit"` )
#### ModelFile
Full list of models available
@deprecated These model names are outdated and this type will not be maintained; please use a string literal instead
##### gptj
List of GPT-J Models
Type: (`"ggml-gpt4all-j-v1.3-groovy.bin"` | `"ggml-gpt4all-j-v1.2-jazzy.bin"` | `"ggml-gpt4all-j-v1.1-breezy.bin"` | `"ggml-gpt4all-j.bin"` )
##### llama
List of Llama Models
Type: (`"ggml-gpt4all-l13b-snoozy.bin"` | `"ggml-vicuna-7b-1.1-q4_2.bin"` | `"ggml-vicuna-13b-1.1-q4_2.bin"` | `"ggml-wizardLM-7B.q4_2.bin"` | `"ggml-stable-vicuna-13B.q4_2.bin"` | `"ggml-nous-gpt4-vicuna-13b.bin"` | `"ggml-v3-13b-hermes-q5_1.bin"` )
##### mpt
List of MPT Models
Type: (`"ggml-mpt-7b-base.bin"` | `"ggml-mpt-7b-chat.bin"` | `"ggml-mpt-7b-instruct.bin"` )
##### replit
List of Replit Models
Type: `"ggml-replit-code-v1-3b.bin"`
#### type
Model architecture. This argument currently has no functionality and is only used as a descriptive identifier for the user.
Type: [ModelType ](#modeltype )
#### LLModel
LLModel class representing a language model.
This is a base class that provides common functionality for different types of language models.
##### constructor
Initialize a new LLModel.
###### Parameters
* `path` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** Absolute path to the model file.
<!-- -->
* Throws ** [Error ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error )** If the model file does not exist.
##### type
Either 'gptj', 'llama', 'mpt', 'replit', or undefined
Returns ** ([ModelType](#modeltype) | [undefined ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/undefined ))** 
##### name
The name of the model.
Returns ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** 
##### stateSize
Get the size of the internal state of the model.
NOTE: This state data is specific to the type of model you have created.
Returns ** [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )** the size in bytes of the internal state of the model
##### threadCount
Get the number of threads used for model inference.
The default is the number of physical cores your computer has.
Returns ** [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )** The number of threads used for model inference.
##### setThreadCount
Set the number of threads used for model inference.
###### Parameters
* `newNumber` ** [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )** The new number of threads.
Returns **void**  
##### raw\_prompt
Prompt the model with a given input and optional parameters.
This is the raw output from the model; tokens are delivered through the callback.
Use the exported createCompletion function for a processed value.
###### Parameters
* `q` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** The prompt input.
* `params` **Partial<[LLModelPromptContext](#llmodelpromptcontext)>** Optional parameters for the prompt context.
* `callback` **function (res: [string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)): void**  
Returns **void**
##### embed
Embed text with the model. Keep in mind that
not all models can embed text (as of 07/16/2023 (mm/dd/yyyy), only bert can).
Use the exported createEmbedding function for a processed value.
###### Parameters
* `text` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** 
Returns ** [Float32Array ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Float32Array )** The embedding result.
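Because embeddings come back as plain `Float32Array`s, downstream similarity math needs no extra dependencies. A minimal sketch using cosine similarity on made-up vectors (not real model output):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors standing in for real embeddings.
const v1 = new Float32Array([0.1, 0.2, 0.3]);
const v2 = new Float32Array([0.1, 0.2, 0.3]);
console.log(cosineSimilarity(v1, v2).toFixed(2)); // "1.00"
```

Identical vectors score 1, orthogonal vectors score 0, which makes this a quick sanity check for semantic-search use cases.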
##### isModelLoaded
Whether the model is loaded or not.
Returns ** [boolean ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean )** 
##### setLibraryPath
Where to search for the pluggable backend libraries
###### Parameters
* `s` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** 
Returns **void**  
##### getLibraryPath
Where to get the pluggable backend libraries
Returns ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** 
#### loadModel
Loads a machine learning model with the specified name. This is the de facto way to create a model.
By default, this will download the model from the official GPT4All website if it is not present at the given path.
##### Parameters
* `modelName` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** The name of the model to load.
* `options` ** (LoadModelOptions | [undefined ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/undefined ))?** (Optional) Additional options for loading the model.
Returns ** [Promise ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise )< (InferenceModel | EmbeddingModel)>** A promise that resolves to an instance of the loaded LLModel.
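A hypothetical usage sketch combining loadModel with createCompletion. The model name, options, and messages are illustrative only, and the dynamic import keeps the snippet loadable even when the gpt4all package is not installed:

```typescript
// Hypothetical usage sketch; the model name below is illustrative,
// and any model from models.json should work the same way.
async function ask(prompt: string): Promise<string> {
  // Dynamic import so this file stays loadable without the dependency.
  // @ts-ignore - the module may not be resolvable at compile time
  const { loadModel, createCompletion } = await import("gpt4all");

  const model = await loadModel("ggml-gpt4all-j-v1.3-groovy.bin", {
    verbose: true, // assumed option: log download/load progress
  });

  const completion = await createCompletion(model, [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: prompt },
  ]);
  return completion.choices[0].message.content;
}
```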
#### createCompletion
The Node.js equivalent to the Python binding's chat\_completion
##### Parameters
* `model` **InferenceModel** The language model object.
* `messages` ** [Array ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array )< [PromptMessage](#promptmessage)>** The array of messages for the conversation.
* `options` ** [CompletionOptions ](#completionoptions )** The options for creating the completion.
Returns ** [CompletionReturn ](#completionreturn )** The completion result.
#### createEmbedding
The Node.js equivalent to the Python binding's Embed4All().embed()
##### Parameters
* `model` **EmbeddingModel** The language model object.
* `text` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** text to embed
Returns ** [Float32Array ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Float32Array )** The embedding result.
#### CompletionOptions
**Extends Partial\<LLModelPromptContext>**
The options for creating the completion.
##### verbose
Indicates if verbose logging is enabled.
Type: [boolean ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean )
##### systemPromptTemplate
Template for the system message. Will be put before the conversation with %1 being replaced by all system messages.
Note that if this is not defined, system messages will not be included in the prompt.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
##### promptTemplate
Template for user messages, with %1 being replaced by the message.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
##### promptHeader
The initial instruction for the model, on top of the prompt
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
##### promptFooter
The last instruction for the model, appended to the end of the prompt.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
#### PromptMessage
A message in the conversation, identical to OpenAI's chat message.
##### role
The role of the message.
Type: (`"system"` | `"assistant"` | `"user"` )
##### content
The message content.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
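A sketch of a conversation array using the documented shape; the local type alias below simply mirrors the role and content fields described above:

```typescript
// Local type alias mirroring the documented PromptMessage shape.
type PromptMessage = {
  role: "system" | "assistant" | "user";
  content: string;
};

// A conversation in the same format OpenAI chat messages use:
// an optional system message, then alternating user/assistant turns.
const messages: PromptMessage[] = [
  { role: "system", content: "You are an assistant that answers concisely." },
  { role: "user", content: "What is the capital of France?" },
  { role: "assistant", content: "Paris." },
  { role: "user", content: "And of Italy?" },
];

console.log(messages.map((m) => m.role).join(","));
```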
#### prompt\_tokens
The number of tokens used in the prompt.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
#### completion\_tokens
The number of tokens used in the completion.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
#### total\_tokens
The total number of tokens used.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
#### CompletionReturn
The result of the completion, similar to OpenAI's format.
##### model
The model used for the completion.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
##### usage
Token usage report.
Type: {prompt\_tokens: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number ), completion\_tokens: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number ), total\_tokens: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )}
##### choices
The generated completions.
Type: [Array ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array )< [CompletionChoice](#completionchoice)>
#### CompletionChoice
A completion choice, similar to OpenAI's format.
##### message
Response message
Type: [PromptMessage ](#promptmessage )
#### LLModelPromptContext
Model inference arguments for generating completions.
##### logitsSize
The size of the raw logits vector.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### tokensSize
The size of the raw tokens vector.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### nPast
The number of tokens in the past conversation.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### nCtx
The number of tokens possible in the context window.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### nPredict
The number of tokens to predict.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### topK
The top-k logits to sample from.
Top-K sampling selects the next token only from the top K most likely tokens predicted by the model.
It helps reduce the risk of generating low-probability or nonsensical tokens, but it may also limit
the diversity of the output. A higher value for top-K (e.g., 100) will consider more tokens and lead
to more diverse text, while a lower value (e.g., 10) will focus on the most probable tokens and generate
more conservative text. A value between 30 and 60 is a good range for most tasks.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
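To see what top-K does mechanically, here is a small self-contained sketch that keeps only the indices of the K highest logits (the numbers are toy values, not model output):

```typescript
// Keep only the indices of the K largest logits; everything else
// is excluded from sampling.
function topKIndices(logits: number[], k: number): number[] {
  return logits
    .map((value, index) => ({ value, index }))
    .sort((a, b) => b.value - a.value) // highest logit first
    .slice(0, k)
    .map((entry) => entry.index);
}

const logits = [1.2, 0.3, 2.5, -0.7, 0.9];
console.log(topKIndices(logits, 2)); // → indices [2, 0], the two most likely tokens
```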
##### topP
The nucleus sampling probability threshold.
Top-P limits the selection of the next token to a subset of tokens whose cumulative probability
exceeds a threshold P. This method, also known as nucleus sampling, balances diversity
and quality by considering both token probabilities and the number of tokens available for sampling.
A higher top-P value (e.g., 0.95) makes the generated text more diverse,
while a lower value (e.g., 0.1) produces more focused and conservative text.
The default value is 0.4, a middle ground between focus and diversity; for
more creative tasks a higher top-P value, roughly 0.5 to 0.9, works well.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### temp
The temperature to adjust the model's output distribution.
Temperature is like a knob that adjusts how creative or focused the output becomes. Higher temperatures
(e.g., 1.2) increase randomness, resulting in more imaginative and diverse text. Lower temperatures (e.g., 0.5)
make the output more focused, predictable, and conservative. When the temperature is set to 0, the output
becomes completely deterministic, always selecting the most probable next token and producing identical results
each time. A safe range is roughly 0.6 to 0.85, but feel free to experiment to find the value that fits your use case.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### nBatch
The number of predictions to generate in parallel.
By splitting the prompt every N tokens, the prompt batch size reduces RAM usage during processing,
at the cost of longer processing time. If N is set too low (e.g., 10), long prompts
with 500+ tokens are affected most, requiring numerous processing runs to complete prompt processing.
For optimal performance, setting the prompt batch size to 2048 processes all tokens in a single run.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### repeatPenalty
The penalty factor for repeated tokens.
Repeat-penalty can help penalize tokens based on how frequently they occur in the text, including the input prompt.
A token that has already appeared five times is penalized more heavily than a token that has appeared only once.
A value of 1 means that there is no penalty and values larger than 1 discourage repeated tokens.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### repeatLastN
The number of last tokens to penalize.
The repeat-penalty-tokens N option controls the number of tokens in the history to consider for penalizing repetition.
A larger value will look further back in the generated text to prevent repetitions, while a smaller value will only
consider recent tokens.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
##### contextErase
The percentage of context to erase if the context window is exceeded.
Type: [number ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number )
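Taken together, these fields form the prompt context object passed alongside a completion request. A minimal sketch (the values below are illustrative, not the library's defaults):

```javascript
// A conservative sampling configuration using the fields documented above.
const promptContext = {
    nPredict: 256,       // generate at most 256 tokens
    topK: 40,            // sample from the 40 most likely tokens
    topP: 0.4,           // nucleus sampling threshold
    temp: 0.7,           // moderate creativity
    nBatch: 2048,        // process the whole prompt in a single run
    repeatPenalty: 1.18, // values > 1 discourage repeated tokens
    repeatLastN: 64,     // look back 64 tokens when penalizing repetition
    contextErase: 0.5,   // erase half the context if the window is exceeded
};
```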
#### createTokenStream
TODO: Help wanted to implement this
##### Parameters
* `llmodel` ** [LLModel ](#llmodel )** 
* `messages` ** [Array ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array )< [PromptMessage](#promptmessage)>** 
* `options` ** [CompletionOptions ](#completionoptions )** 
Returns **function (ll: [LLModel](#llmodel)): AsyncGenerator<[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)>**  
#### DEFAULT\_DIRECTORY
From the Python API: models are stored in `(homedir)/.cache/gpt4all/`.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
#### DEFAULT\_LIBRARIES\_DIRECTORY
From the Python API: the default path where dynamic libraries are stored.
You may separate paths with a semicolon to search in multiple areas.
This searches DEFAULT\_DIRECTORY/libraries, cwd/libraries, and finally cwd.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
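As a sketch, a semicolon-separated search list like this can be split into its candidate directories (the paths below are hypothetical, not the actual default value):

```javascript
// A hypothetical libraries path with three search locations.
const librariesPath = "/home/user/.cache/gpt4all/libraries;./libraries;.";
// Each entry is tried in order when locating the dynamic libraries.
const searchDirs = librariesPath.split(";");
```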
#### DEFAULT\_MODEL\_CONFIG
Default model configuration.
Type: ModelConfig
#### DEFAULT\_PROMPT\_CONTEXT
Default prompt context.
Type: [LLModelPromptContext ](#llmodelpromptcontext )
#### DEFAULT\_MODEL\_LIST\_URL
Default model list URL.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
#### downloadModel
Initiates the download of a model file.
By default this starts the download without waiting for it to finish. Use the returned controller to alter this behavior.
##### Parameters
* `modelName` ** [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )** The model to be downloaded.
* `options` ** [DownloadModelOptions ](#downloadmodeloptions )** to pass into the downloader. Default is `{ modelPath: process.cwd(), verbose: false }`.
##### Examples
```javascript
const download = downloadModel('ggml-gpt4all-j-v1.3-groovy.bin')
download.promise.then(() => console.log('Downloaded!'))
```
* Throws ** [Error ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error )** If the model already exists in the specified location.
* Throws ** [Error ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error )** If the model cannot be found at the specified url.
Returns ** [DownloadController ](#downloadcontroller )** object that allows controlling the download process.
#### DownloadModelOptions
Options for the model download process.
##### modelPath
Location to download the model to.
Defaults to `process.cwd()` (the current working directory).
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
##### verbose
Debug mode: logs how long the download took, in seconds.
Type: [boolean ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean )
##### url
Remote download URL. Defaults to `https://gpt4all.io/models/gguf/<modelName>`.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
##### md5sum
MD5 sum of the model file. If this is provided, the downloaded file will be checked against this sum.
If the sums do not match, an error will be thrown and the file will be deleted.
Type: [string ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String )
#### DownloadController
Model download controller.
##### cancel
Cancels the download request when called.
Type: function (): void
##### promise
A promise that resolves to the downloaded model's config once the download completes.
Type: [Promise ](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise )\<ModelConfig>
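The cancel/promise pair above can be sketched with an `AbortController`. This mimics the controller's shape for illustration only; it is not the library's actual implementation, and `makeController` is a hypothetical helper:

```javascript
// Sketch of the DownloadController pattern: a cancel function paired
// with a promise that resolves to the downloaded model's config.
function makeController(work) {
    const aborter = new AbortController();
    return {
        cancel: () => aborter.abort(),       // cancel the in-flight download
        promise: work(aborter.signal),       // resolves with the model config
    };
}

// A real implementation would fetch the model file here and abort
// the request when `signal` fires; this stub resolves immediately.
const controller = makeController(async (signal) => {
    return { filename: "example.bin" };
});
```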