typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
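The download cleanup plus with-syntax above can be sketched roughly like this (a minimal sketch; the function name and arguments are hypothetical, not the project's actual code):

```python
import os

def download_to(chunks, dest_path):
    """Write streamed chunks to dest_path; remove the partial file if interrupted."""
    try:
        # with-syntax guarantees the file handle is closed even on error
        with open(dest_path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
    except Exception:
        # clean up the interrupted download instead of leaving a partial file
        if os.path.exists(dest_path):
            os.remove(dest_path)
        raise
```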
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that was once done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC) goadman
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
No effect today, but avoids a *potential* bug later if we use
actualVocabSize - which matters when a model has a larger embedding
tensor / number of output logits than actually trained tokens, to leave
room for adding extras in finetuning. Presently all of our models have
had "placeholder" tokens in the vocab, so this hasn't broken anything,
but if the sizes did differ we would want the equivalent of
`logits[actualVocabSize:]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this.)
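As a toy demonstration of the point above (illustrative numbers only, not the sampling code itself): once the logits buffer is larger than the trained vocabulary, a positive-offset slice and a negative-offset slice no longer select the same window.

```python
# Toy illustration: padded logits buffer vs. actually trained vocabulary.
n_logits = 10            # padded number of output logits
actual_vocab_size = 8    # actually trained vocabulary size
logits = list(range(n_logits))

# The two offsets diverge as soon as the sizes differ:
assert logits[actual_vocab_size:] == [8, 9]
assert logits[-actual_vocab_size:] == [2, 3, 4, 5, 6, 7, 8, 9]
```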
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
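A minimal sketch of the behavior described (simplified softmax sampling in Python, not the project's actual C++ code): when temp <= 0, pick the argmax directly instead of dividing logits by the temperature.

```python
import math
import random

def sample_token(logits, temp):
    # temp <= 0 would blow the scaled logits up toward infinity,
    # so fall back to greedy (argmax) sampling instead.
    if temp <= 0.0:
        return max(range(len(logits)), key=logits.__getitem__)
    scaled = [l / temp for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]
```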
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid unicodedecode exception
* revert back to _build_prompt
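The `errors="replace"` decode mentioned above can be illustrated like this (illustrative only, not the binding's actual code): when a multi-byte UTF-8 character is split across streamed chunks, a strict decode raises UnicodeDecodeError, while replace-mode yields a placeholder instead.

```python
def decode_stream(chunks):
    # yield one decoded piece per raw byte chunk; replace undecodable bytes
    for raw in chunks:
        yield raw.decode("utf-8", errors="replace")

# "é" is b"\xc3\xa9" in UTF-8; split across chunks, each half is undecodable
pieces = list(decode_stream([b"caf", b"\xc3", b"\xa9"]))
```

A stateful incremental decoder (`codecs.getincrementaldecoder("utf-8")`) would keep the split character intact across chunks; `errors="replace"` just prevents the exception, which is all the commit above aims for.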
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00
|
|
|
#include "llmodel.h"
|
2024-03-28 16:08:23 +00:00
|
|
|
#include "llmodel_c.h"
|
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one funcion to append .bin suffix
* hotfix default verbose optioin
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct reponse when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squashed merged from dlopen_backend_5 where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overriden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finalyl compiled on windows (MSVC) goadman
* update README and spec and promisfy createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt reponse collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained token
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[actualVocabSize:]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this.)
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid unicodedecode exception
* revert back to _build_prompt
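The streaming idea from #895 can be sketched as a generator that yields decoded tokens one at a time instead of returning the whole completion; the helper below is hypothetical, not the bindings' API.

```python
def generate_stream(raw_tokens):
    """Yield decoded tokens one at a time (illustrative sketch)."""
    for tok in raw_tokens:
        # errors="replace" mirrors the "replace in decode" fix noted above,
        # avoiding UnicodeDecodeError on bytes that are not valid UTF-8.
        yield tok.decode("utf-8", errors="replace")

pieces = list(generate_stream([b"Hello", b", ", b"world", b"\xff"]))
```

The invalid byte `b"\xff"` comes through as the U+FFFD replacement character rather than raising mid-stream.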
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add C/C++ highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
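The BOS point above can be sketched with hypothetical token ids: when trimming the context window for recalculation, the BOS token at the front must survive the trim.

```python
BOS = 1  # hypothetical BOS token id
ctx = [BOS, 10, 11, 12, 13, 14]
keep_last = 3

# Keep BOS, drop the oldest middle tokens (sketch, not the actual chat code).
trimmed = [ctx[0]] + ctx[-keep_last:]
```

A naive `ctx[-keep_last:]` would erase the BOS along with the old tokens.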
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00
#include "prompt.h"

#include <atomic>
#include <filesystem>
#include <iostream>
#include <memory>
#include <mutex>
#include <napi.h>
#include <set>
namespace fs = std::filesystem;

class NodeModelWrapper : public Napi::ObjectWrap<NodeModelWrapper>
{
  public:
    NodeModelWrapper(const Napi::CallbackInfo &);
    // virtual ~NodeModelWrapper();
    Napi::Value GetType(const Napi::CallbackInfo &info);
    Napi::Value IsModelLoaded(const Napi::CallbackInfo &info);
    Napi::Value StateSize(const Napi::CallbackInfo &info);
    // void Finalize(Napi::Env env) override;
    /**
     * Prompting the model. This entails spawning a new thread and adding the response tokens
     * into a thread local string variable.
     */
    Napi::Value Infer(const Napi::CallbackInfo &info);
    void SetThreadCount(const Napi::CallbackInfo &info);
    void Dispose(const Napi::CallbackInfo &info);
    Napi::Value GetName(const Napi::CallbackInfo &info);
    Napi::Value ThreadCount(const Napi::CallbackInfo &info);
    Napi::Value GenerateEmbedding(const Napi::CallbackInfo &info);
    Napi::Value HasGpuDevice(const Napi::CallbackInfo &info);
    Napi::Value ListGpus(const Napi::CallbackInfo &info);
    Napi::Value InitGpuByString(const Napi::CallbackInfo &info);
    Napi::Value GetRequiredMemory(const Napi::CallbackInfo &info);
    Napi::Value GetGpuDevices(const Napi::CallbackInfo &info);
    /*
     * The path that is used to search for the dynamic libraries
     */
    Napi::Value GetLibraryPath(const Napi::CallbackInfo &info);
    /**
     * Creates the LLModel class
     */
    static Napi::Function GetClass(Napi::Env);
    llmodel_model GetInference();
vulkan support for typescript bindings, gguf support (#1390)
* adding some native methods to cpp wrapper
* gpu seems to work
* typings and add availibleGpus method
* fix spelling
* fix syntax
* more
* normalize methods to conform to py
* remove extra dynamic linker deps when building with vulkan
* bump python version (library linking fix)
* Don't link against libvulkan.
* vulkan python bindings on windows fixes
* Bring the vulkan backend to the GUI.
* When device is Auto (the default), only consider discrete GPUs; otherwise fall back to CPU.
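The "Auto" device policy above can be sketched as follows; the function and data layout are hypothetical, not the bindings' actual API.

```python
def pick_device(requested, gpus):
    """Sketch of the 'Auto' policy: gpus is a list of (name, is_discrete) pairs."""
    if requested == "auto":
        discrete = [name for name, disc in gpus if disc]
        return discrete[0] if discrete else "cpu"  # no discrete GPU: CPU fallback
    names = [name for name, _ in gpus]
    return requested if requested in names else "cpu"

choice = pick_device("auto", [("igpu0", False), ("dgpu0", True)])
```

An explicitly requested device bypasses the discrete-only filter, which matches the "otherwise fall back to CPU" wording only when the request cannot be satisfied.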
* Show the device we're currently using.
* Fix up the name and formatting.
* init at most one vulkan device, submodule update
fixes issues w/ multiple of the same gpu
* Update the submodule.
* Add version 2.4.15 and bump the version number.
* Fix a bug where we're not properly falling back to CPU.
* Sync to a newer version of llama.cpp with bugfix for vulkan.
* Report the actual device we're using.
* Only show GPU when we're actually using it.
* Bump to new llama with new bugfix.
* Release notes for v2.4.16 and bump the version.
* Fallback to CPU more robustly.
* Release notes for v2.4.17 and bump the version.
* Bump the Python version to python-v1.0.12 to restrict the quants that vulkan recognizes.
* Link against ggml in bin so we can get the available devices without loading a model.
* Send actual and requested device info for those who have opt-in.
* Actually bump the version.
* Release notes for v2.4.18 and bump the version.
* Fix for crashes on systems where vulkan is not installed properly.
* Release notes for v2.4.19 and bump the version.
* fix typings and vulkan build works on win
* Add flatpak manifest
* Remove unnecessary stuffs from manifest
* Update to 2.4.19
* appdata: update software description
* Latest rebase on llama.cpp with gguf support.
* macos build fixes
* llamamodel: metal supports all quantization types now
* gpt4all.py: GGUF
* pyllmodel: print specific error message
* backend: port BERT to GGUF
* backend: port MPT to GGUF
* backend: port Replit to GGUF
* backend: use gguf branch of llama.cpp-mainline
* backend: use llamamodel.cpp for StarCoder
* conversion scripts: cleanup
* convert scripts: load model as late as possible
* convert_mpt_hf_to_gguf.py: better tokenizer decoding
* backend: use llamamodel.cpp for Falcon
* convert scripts: make them directly executable
* fix references to removed model types
* modellist: fix the system prompt
* backend: port GPT-J to GGUF
* gpt-j: update inference to match latest llama.cpp insights
- Use F16 KV cache
- Store transposed V in the cache
- Avoid unnecessary Q copy
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
ggml upstream commit 0265f0813492602fec0e1159fe61de1bf0ccaf78
* chatllm: grammar fix
* convert scripts: use bytes_to_unicode from transformers
* convert scripts: make gptj script executable
* convert scripts: add feed-forward length for better compatibility
This GGUF key is used by all llama.cpp models with upstream support.
* gptj: remove unused variables
* Refactor for subgroups on mat * vec kernel.
* Add q6_k kernels for vulkan.
* python binding: print debug message to stderr
* Fix regenerate button to be deterministic and bump the llama version to latest we have for gguf.
* Bump to the latest fixes for vulkan in llama.
* llamamodel: fix static vector in LLamaModel::endTokens
* Switch to new models2.json for new gguf release and bump our version to
2.5.0.
* Bump to latest llama/gguf branch.
* chat: report reason for fallback to CPU
* chat: make sure to clear fallback reason on success
* more accurate fallback descriptions
* differentiate between init failure and unsupported models
* backend: do not use Vulkan with non-LLaMA models
* Add q8_0 kernels to kompute shaders and bump to latest llama/gguf.
* backend: fix build with Visual Studio generator
Use the $<CONFIG> generator expression instead of CMAKE_BUILD_TYPE. This
is needed because Visual Studio is a multi-configuration generator, so
we do not know what the build type will be until `cmake --build` is
called.
Fixes #1470
* remove old llama.cpp submodules
* Reorder and refresh our models2.json.
* rebase on newer llama.cpp
* python/embed4all: use gguf model, allow passing kwargs/overriding model
* Add starcoder, rift and sbert to our models2.json.
* Push a new version number for llmodel backend now that it is based on gguf.
* fix stray comma in models2.json
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
* Speculative fix for build on mac.
* chat: clearer CPU fallback messages
* Fix crasher with an empty string for prompt template.
* Update the language here to avoid misunderstanding.
* added EM German Mistral Model
* make codespell happy
* issue template: remove "Related Components" section
* cmake: install the GPT-J plugin (#1487)
* Do not delete saved chats if we fail to serialize properly.
* Restore state from text if necessary.
* Another codespell attempted fix.
* llmodel: do not call magic_match unless build variant is correct (#1488)
* chatllm: do not write uninitialized data to stream (#1486)
* mat*mat for q4_0, q8_0
* do not process prompts on gpu yet
* python: support Path in GPT4All.__init__ (#1462)
* llmodel: print an error if the CPU does not support AVX (#1499)
* python bindings should be quiet by default
* disable llama.cpp logging unless GPT4ALL_VERBOSE_LLAMACPP envvar is
nonempty
* make verbose flag for retrieve_model default false (but also be
overridable via gpt4all constructor)
should be able to run a basic test:
```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```
and see no non-model output when successful
* python: always check status code of HTTP responses (#1502)
* Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current, instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.
* Update README.md
Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
* fix embed4all filename
https://discordapp.com/channels/1076964370942267462/1093558720690143283/1161778216462192692
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
* Improves Java API signatures maintaining back compatibility
* python: replace deprecated pkg_resources with importlib (#1505)
* Updated chat wishlist (#1351)
* q6k, q4_1 mat*mat
* update mini-orca 3b to gguf2, license
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
* convert scripts: fix AutoConfig typo (#1512)
* publish config https://docs.npmjs.com/cli/v9/configuring-npm/package-json#publishconfig (#1375)
merge into my branch
* fix appendBin
* fix gpu not initializing first
* sync up
* progress, still wip on destructor
* some detection work
* untested dispose method
* add js side of dispose
* Update gpt4all-bindings/typescript/index.cc
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update gpt4all-bindings/typescript/index.cc
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update gpt4all-bindings/typescript/index.cc
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update gpt4all-bindings/typescript/src/gpt4all.d.ts
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update gpt4all-bindings/typescript/src/gpt4all.js
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update gpt4all-bindings/typescript/src/util.js
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix tests
* fix circleci for nodejs
* bump version
---------
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan.biswas@gmail.com>
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Jan Philipp Harries <jpdus@users.noreply.github.com>
Co-authored-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Co-authored-by: Alex Soto <asotobu@gmail.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-11-01 19:38:58 +00:00
private:
    /**
     * The underlying inference that interfaces with the C interface
     */
    llmodel_model inference_;
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that was once done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC) goadman
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor / number of output logits than actually trained
tokens, to allow room for adding extras in finetuning - presently all
of our models have had "placeholder" tokens in the vocab so this hasn't
broken anything, but if the sizes did differ we want the equivalent of
`logits[actualVocabSize:]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this).
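The distinction above can be shown with a toy NumPy sketch (the sizes here are made up for illustration, not taken from any real model): a negative offset is invisible when the logits tensor is exactly vocab-sized, but it silently shifts the start point once the tensor is padded past the trained vocabulary.

```python
import numpy as np

# Hypothetical sizes: a logits tensor padded past the trained vocab.
n_logits, actual_vocab = 8, 6
logits = np.arange(n_logits)        # stand-in logits, value == index

# When the sizes match, the negative offset is a no-op ...
exact = np.arange(actual_vocab)
assert exact[-actual_vocab:][0] == exact[0]

# ... but with padding it moves the start point away from index 0.
shifted = logits[-actual_vocab:]
assert shifted[0] == n_logits - actual_vocab   # starts at 2, not 0
```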
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
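The rule described above can be sketched in a few lines of Python (a simplified stand-in for the llama.cpp sampler, not the actual code): any temperature at or below zero short-circuits to argmax instead of dividing the logits by temp.

```python
import numpy as np

def sample_token(logits, temp, rng=None):
    """Toy sampler: greedy at temp <= 0, softmax sampling otherwise."""
    logits = np.asarray(logits, dtype=np.float64)
    if temp <= 0.0:
        # Dividing by temp here would scale the logits toward +/-inf
        # and corrupt the softmax, so sample greedily instead.
        return int(np.argmax(logits))
    scaled = logits / temp
    probs = np.exp(scaled - scaled.max())   # stable softmax
    probs /= probs.sum()
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(probs), p=probs))

assert sample_token([1.0, 3.5, 2.0], temp=0.0) == 1   # greedy -> argmax
```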
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid unicodedecode exception
* revert back to _build_prompt
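The "replace in decode" item above is plain stdlib behavior worth illustrating: when a multi-byte UTF-8 character is split across streamed tokens, strict decoding raises, while errors="replace" yields U+FFFD instead.

```python
chunk = b"\xe2\x9c"  # first two bytes of "\u2713" (CHECK MARK), truncated

try:
    chunk.decode("utf-8")          # strict: raises on the partial char
    raise AssertionError("expected UnicodeDecodeError")
except UnicodeDecodeError:
    pass

# replace: substitutes a single U+FFFD for the truncated sequence
assert chunk.decode("utf-8", errors="replace") == "\ufffd"
```

An incremental decoder (`codecs.getincrementaldecoder("utf-8")`) is the other common fix, since it buffers the partial bytes until the rest of the character arrives.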
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses which is disruptive to code completions in responses.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add C++/C highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00
    std::mutex inference_mutex;

    std::string type;
    // corresponds to LLModel::name() in typescript
    std::string name;
    int nCtx{};
    int nGpuLayers{};
    std::string full_model_path;
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one funcion to append .bin suffix
* hotfix default verbose optioin
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct reponse when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squashed merged from dlopen_backend_5 where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overriden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finalyl compiled on windows (MSVC) goadman
* update README and spec and promisfy createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt reponse collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained token
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[actualVocabSize:]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this.)
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Give the short usage example its own section, as it is not specific to local builds
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added `errors="replace"` in decode to avoid a UnicodeDecodeError
* revert back to _build_prompt
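The streaming generator together with the decode fix can be sketched roughly like so (the `generate_tokens` method name is hypothetical; the real bindings call into the C++ backend):

```python
def stream_response(model, prompt):
    """Yield decoded text token by token instead of one final string."""
    for token_bytes in model.generate_tokens(prompt):
        # A token boundary can split a multi-byte UTF-8 character;
        # errors="replace" substitutes U+FFFD instead of raising
        # UnicodeDecodeError mid-stream.
        yield token_bytes.decode("utf-8", errors="replace")
```

A caller can then print pieces as they arrive, or reassemble the full reply with `"".join(stream_response(model, prompt))`.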
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown in responses (#938),
which was disruptive to code completions in responses.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
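The constraint can be sketched as follows (a toy illustration with made-up names; the real logic lives in the C++ chat backend):

```python
def shrink_context(tokens, keep):
    """Drop old tokens to make room for new ones, but never erase the
    BOS token at position 0: the model expects every context to
    begin with it."""
    if len(tokens) <= keep:
        return list(tokens)
    bos = tokens[0]
    if keep <= 1:
        return [bos]
    # Keep BOS plus the most recent (keep - 1) tokens.
    return [bos] + tokens[-(keep - 1):]
```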
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 19:00:20 +00:00