Commit Graph

1090 Commits

Author SHA1 Message Date
Tim Miller
797891c995
Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)
* Initial Library Loader

* Load library as part of Model factory

* Dynamically search and find the dlls

* Update tests to use locally built runtimes

* Fix dylib loading, add macos runtime support for sample/tests

* Bypass automatic loading by default.

* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile

* Switch Loading again

* Update build scripts for mac/linux

* Update bindings to support newest breaking changes

* Fix build

* Use llmodel for Windows

* Actually, it does need to be libllmodel

* Name

* Remove TFMs, bypass loading by default

* Fix script

* Delete mac script

---------

Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
2023-06-13 14:05:34 +02:00
Aaron Miller
88616fde7f
llmodel: change tokenToString to not use string_view (#968)
fixes a definite use-after-free and likely avoids some other
potential ones. A std::string converts to a std::string_view
automatically, but as soon as the std::string in question goes out
of scope it is freed and the string_view is left pointing at freed
memory. This is *mostly* fine if it's returning a reference to the
tokenizer's internal vocab table, but it's, imo, too easy to return
a reference to a dynamically constructed string this way, as replit
is doing (and unfortunately needs to do, to convert the internal
whitespace replacement symbol back to a space)
2023-06-13 07:14:02 -04:00
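
The hazard described above is easy to reproduce in isolation. Below is a minimal C++ sketch of the pattern this commit removes; the function name follows the commit title, but the signature and body are illustrative assumptions, not the actual llmodel code.

```cpp
#include <string>
#include <string_view>

// Buggy shape: a std::string converts implicitly to std::string_view,
// but the view does not own the bytes. Returning a view of a local
// string leaves the caller pointing into freed memory. (Most compilers
// warn about returning the address of a local here.)
std::string_view tokenToStringBad(int /*token*/) {
    std::string s = "\u2581word";  // dynamically built, as replit must do
    return s;                      // dangling view once s is destroyed
}

// Fixed shape: return the string by value so the caller owns the bytes.
std::string tokenToStringGood(int /*token*/) {
    std::string s = "\u2581word";
    return s;                      // safe: moved/copied out
}
```
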
Felix Zaslavskiy
726dcbd43d
Typo, ignore list (#967)
Fix typo in javadoc,
Add word to ignore list for codespellrc

---------

Co-authored-by: felix <felix@zaslavskiy.net>
2023-06-13 00:53:27 -07:00
Richard Guo
a9bea27537 bring back when condition 2023-06-12 23:11:54 -04:00
Richard Guo
28d1e37d15 let me deploy pls 2023-06-12 23:11:54 -04:00
Richard Guo
5a0b348219 second hold for pypi deploy 2023-06-12 23:11:54 -04:00
Richard Guo
e5cb9f6b7b no need for main 2023-06-12 23:11:54 -04:00
Richard Guo
014205a916 dummy python change 2023-06-12 23:11:54 -04:00
Richard Guo
dcd2bbae9d python workflows in circleci 2023-06-12 23:11:54 -04:00
Richard Guo
e9449190cd version bump 2023-06-12 17:32:56 -04:00
Adam Treat
84deebd223 Fix compile for windows and linux again. PLEASE DON'T REVERT THIS, git gui! 2023-06-12 17:08:55 -04:00
Jacob Nguyen
8d53614444
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed

* Small cleanups for settings dialog.

* Fix the build.

* localdocs

* Fixup the rescan. Fix debug output.

* Add remove folder implementation.

* Remove this signal as unnecessary for now.

* Cleanup of the database, better chunking, better matching.

* Add new reverse prompt for new localdocs context feature.

* Add a new muted text color.

* Turn off the debugging messages by default.

* Add prompt processing and localdocs to the busy indicator in UI.

* Specify a large number of suffixes we will search for now.

* Add a collection list to support a UI.

* Add a localdocs tab.

* Start fleshing out the localdocs ui.

* Begin implementing the localdocs ui in earnest.

* Clean up the settings dialog for localdocs a bit.

* Add more of the UI for selecting collections for chats.

* Complete the settings for localdocs.

* Adds the collections to serialize and implement references for localdocs.

* Store the references separately so they are not sent to datalake.

* Add context link to references.

* Don't use the full path in reference text.

* Various fixes to remove unnecessary warnings.

* Add a newline

* ignore rider and vscode dirs

* create test project and basic model loading tests

* make sample print usage and cleaner

* Get the backend as well as the client building/working with msvc.

* Libraries named differently on msvc.

* Bump the version number.

* This time remember to bump the version right after a release.

* rm redundant json

* More precise condition

* Nicer handling of missing model directory.
Correct exception message.

* Log where the model was found

* Concise model matching

* reduce nesting, better error reporting

* convert to f-strings

* less magic number

* 1. Cleanup the interrupted download
2. with-syntax

* Redundant else

* Do not ignore explicitly passed 4 threads

* Correct return type

* Add optional verbosity

* Correct indentation of the multiline error message

* one function to append .bin suffix

* hotfix default verbose option

* export hidden types and fix prompt() type

* tiny typo (#739)

* Update README.md (#738)

* Update README.md

fix golang gpt4all import path

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

* Update README.md

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

---------

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>

* fix(training instructions): model repo name (#728)

Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>

* C# Bindings - Prompt formatting (#712)

* Added support for custom prompt formatting

* more docs added

* bump version

* clean up cc files and revert things

* LocalDocs documentation initial (#761)

* LocalDocs documentation initial

* Improved localdocs documentation (#762)

* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation

* Improved localdocs documentation

* New tokenizer implementation for MPT and GPT-J

Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.

Featuring:
 * Fixed unicode handling (via ICU)
 * Fixed BPE token merge handling
 * Complete added vocabulary handling

* buf_ref.into() can be const now

* add tokenizer readme w/ instructions for convert script

* Revert "add tokenizer readme w/ instructions for convert script"

This reverts commit 9c15d1f83e.

* Revert "buf_ref.into() can be const now"

This reverts commit 840e011b75.

* Revert "New tokenizer implementation for MPT and GPT-J"

This reverts commit ee3469ba6c.

* Fix remove model from model download for regular models.

* Fixed formatting of localdocs docs (#770)

* construct and return the correct response when the request is a chat completion

* chore: update typings to keep consistent with python api

* progress, updating createCompletion to mirror py api

* update spec, unfinished backend

* prebuild binaries for package distribution using prebuildify/node-gyp-build

* Get rid of blocking behavior for regenerate response.

* Add a label to the model loading visual indicator.

* Use the new MyButton for the regenerate response button.

* Add a hover and pressed to the visual indication of MyButton.

* Fix wording of this accessible description.

* Some color and theme enhancements to make the UI contrast a bit better.

* Make the comboboxes align in UI.

* chore: update namespace and fix prompt bug

* fix linux build

* add roadmap

* Fix offset of prompt/response icons for smaller text.

* Dlopen backend 5 (#779)

Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved. (A sketch of the dlopen pattern follows this commit entry.)

* Add a custom busy indicator to further align look and feel across platforms.

* Draw the indicator for combobox to ensure it looks the same on all platforms.

* Fix warning.

* Use the proper text color for sending messages.

* Fixup the plus new chat button.

* Make all the toolbuttons highlight on hover.

* Advanced avxonly autodetection (#744)

* Advanced avxonly requirement detection

* chore: support llama version >= 3 and ggml default

* Dlopen better implementation management (Version 2)

* Add fixme's and clean up a bit.

* Documentation improvements on LocalDocs (#790)

* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* typo

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Adapt code

* Makefile changes (WIP to test)

* Debug

* Adapt makefile

* Style

* Implemented logging mechanism (#785)

* Cleaned up implementation management (#787)

* Cleaned up implementation management

* Initialize LLModel::m_implementation to nullptr

* llmodel.h: Moved dlhandle fwd declare above LLModel class

* Fix compile

* Fixed double-free in LLModel::Implementation destructor

* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)

* Drop leftover include

* Add ldl in gpt4all.go for dynamic linking (#797)

* Logger should also output to stderr

* Fix MSVC Build, Update C# Binding Scripts

* Update gpt4all_chat.md (#800)

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* C# Bindings - improved logging (#714)

* added optional support for .NET logging

* bump version and add missing alpha suffix

* avoid creating additional namespace for extensions

* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors

---------

Signed-off-by: mvenditto <venditto.matteo@gmail.com>

* Make localdocs work with server mode.

* Better name for database results.

* Fix for stale references after we regenerate.

* Don't hardcode these.

* Fix bug with resetting context with chatgpt model.

* Trying to shrink the copy+paste code and do more code sharing between backend model impl.

* Remove this as it is no longer useful.

* Try and fix build on mac.

* Fix mac build again.

* Add models/release.json to github repo to allow PRs

* Fixed spelling error in models.json

to make CI happy

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* updated bindings code for updated C api

* load all model libs

* model creation is failing... debugging

* load libs correctly

* fixed finding model libs

* cleanup

* cleanup

* more cleanup

* small typo fix

* updated binding.gyp

* Fixed model type for GPT-J (#815)

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* Fixed tons of warnings and clazy findings (#811)

* Some tweaks to UI to make window resizing smooth and flow nicely.

* Min constraints on about dialog.

* Prevent flashing of white on resize.

* Actually use the theme dark color for window background.

* Add the ability to change the directory via text field not just 'browse' button.

* add scripts to build dlls

* markdown doc gen

* add scripts, nearly done moving breaking changes

* merge with main

* oops, fixed comment

* more meaningful name

* leave for testing

* Only default mlock on macOS where swap seems to be a problem

Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by 9c6c09cbd2

Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>

* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.

* some tweaks to optional types and defaults

* mingw script for windows compilation

* Update README.md

huggingface -> Hugging Face

Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>

* Backend prompt dedup (#822)

* Deduplicated prompt() function code

* Better error handling when the model fails to load.

* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)

* Update build_and_run.md (#834)

Signed-off-by: AT <manyoso@users.noreply.github.com>

* Trying out a new feature to download directly from huggingface.

* Try again with the url.

* Allow for download of models hosted on third party hosts.

* Fix up for newer models on reset context. This prevents the model from totally failing after a reset context.

* Update to latest llama.cpp

* Remove older models that are not as popular. (#837)

* Remove older models that are not as popular.

* Update models.json

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update models.json (#838)

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update models.json

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* feat: finally compiled on windows (MSVC)

* update README and spec and promisify createCompletion

* update d.ts

* Make installers work with mac/windows for big backend change.

* Need this so the linux installer packages it as a dependency.

* Try and fix mac.

* Fix compile on mac.

* These need to be installed for them to be packaged and work for both mac and windows.

* Fix installers for windows and linux.

* Fix symbol resolution on windows.

* updated pypi version

* Release notes for version 2.4.5 (#853)

* Update README.md (#854)

Signed-off-by: AT <manyoso@users.noreply.github.com>

* Documentation for model sideloading (#851)

* Documentation for model sideloading

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Update gpt4all_chat.md

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Speculative fix for windows llama models with installer.

* Revert "Speculative fix for windows llama models with installer."

This reverts commit add725d1eb.

* Revert "Fix bug with resetting context with chatgpt model." (#859)

This reverts commit e0dcf6a14f.

* Fix llama models on linux and windows.

* Bump the version.

* New release notes

* Set thread counts after loading model (#836)

* Update gpt4all_faq.md (#861)

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* Supports downloading officially supported models not hosted on gpt4all R2

* Replit Model (#713)

* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but lots of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment

* Synced llama.cpp.cmake with upstream (#887)

* Fix for windows.

* fix: build script

* Revert "Synced llama.cpp.cmake with upstream (#887)"

This reverts commit 5c5e10c1f5.

* Update README.md (#906)

Add PyPI link and add clickable, more specific link to documentation

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>

* Update CollectionsDialog.qml (#856)

Phrasing for localdocs

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

* sampling: remove incorrect offset for n_vocab (#900)

no effect, but avoids a *potential* bug later if we use
actualVocabSize, which is for when a model has a larger embedding
tensor / number of output logits than actually trained tokens, to
allow room for adding extras in finetuning. Presently all of our
models have had "placeholder" tokens in the vocab, so this hasn't
broken anything, but if the sizes did differ we would want the
equivalent of `logits[:actualVocabSize]` (the start point is
unchanged), not `logits[-actualVocabSize:]` (which is what the
removed offset computed).

* non-llama: explicitly greedy sampling for temp<=0 (#901)

copied directly from llama.cpp; without this, temp=0.0 will just
scale all the logits to infinity and give bad output

* work on thread safety and cleaning up, adding object option

* chore: cleanup tests and spec

* refactor for object based startup

* more docs

* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.

* more docs

* Synced llama.cpp.cmake with upstream

* add lock file to ignore codespell

* Move usage in Python bindings readme to own section (#907)

Have own section for short usage example, as it is not specific to local build

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>

* Always sync for circleci.

* update models json with replit model

* Forgot to bump.

* Change the default values for generation in GUI

* Removed double-static from variables in replit.cpp

The anonymous namespace already makes it static.

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* Generator in Python Bindings - streaming yields tokens at a time (#895)

* generator method

* cleanup

* bump version number for clarity

* added replace in decode to avoid UnicodeDecodeError exceptions

* revert back to _build_prompt

* Do auto detection by default in C++ API

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* remove comment

* add comments for index.h

* chore: add new models and edit ignore files and documentation

* llama on Metal (#885)

Support latest llama with Metal

---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>

* Revert "llama on Metal (#885)"

This reverts commit b59ce1c6e7.

* add more readme stuff and debug info

* spell

* Metal+LLama take two (#929)

Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>

* add prebuilts for windows

* Add new solution for context links that does not force regular markdown (#938)

in responses, which is disruptive to code completions.

* add prettier

* split out non llm related methods into util.js, add listModels method

* add prebuild script for creating all platforms bindings at once

* check in prebuild linux/so libs and allow distribution of napi prebuilds

* apply autoformatter

* move constants in config.js, add loadModel and retrieveModel methods

* Clean up the context links a bit.

* Don't interfere with selection.

* Add code blocks and python syntax highlighting.

* Spelling error.

* Add c++/c highlighting support.

* Fix some bugs with bash syntax and add some C23 keywords.

* Bugfixes for prompt syntax highlighting.

* Try and fix a false positive from codespell.

* When recalculating context we can't erase the BOS.

* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5ed
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`

* remove unneeded .so path

---------

Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 15:00:20 -04:00
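
Among the changes folded into this squash, the "Dlopen backend" is the most structural: model implementations become shared libraries discovered and loaded at runtime. A rough sketch of that pattern, using a hypothetical `create_model` entry point rather than the real llmodel plugin symbols:

```cpp
#include <cstdio>
#include <dlfcn.h>

// Hypothetical plugin entry point; the real llmodel backends expose
// their own factory/version symbols.
using create_fn = void *(*)();

void *loadImplementation(const char *path) {
    // Load the implementation library at runtime instead of link time.
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    // Look up the plugin's factory function by name.
    auto create = reinterpret_cast<create_fn>(dlsym(handle, "create_model"));
    if (!create) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return nullptr;
    }
    return create();  // construct the model via the plugin's factory
}
```
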
Felix Zaslavskiy
44bf91855d
Initial 1.0.0 Java-Bindings PR/release (#805)
* Initial 1.0.0 Java-Bindings PR/release

* Initial 1.1.0 Java-Bindings PR/release

* Add debug ability

* 1.1.2 release

---------

Co-authored-by: felix <felix@zaslavskiy.net>
2023-06-12 14:58:06 -04:00
Juuso Alasuutari
5cfb1bda89
llmodel: add model wrapper destructor, fix mem leak in golang bindings (#862)
Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>
2023-06-12 09:41:22 -07:00
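
The leak class fixed here is the usual one for C-API wrappers: a handle created through the C interface is never destroyed. A minimal RAII sketch with stand-in types (the actual llmodel C API and golang binding differ):

```cpp
// Hypothetical stand-ins for the C API surface the bindings wrap.
struct llmodel_model_impl { /* opaque model state */ };
using llmodel_model = llmodel_model_impl *;

llmodel_model model_create() { return new llmodel_model_impl; }
void model_destroy(llmodel_model m) { delete m; }

// RAII wrapper: the destructor guarantees the handle is released, so a
// binding that forgets an explicit free can no longer leak the model.
class ModelWrapper {
public:
    ModelWrapper() : m_model(model_create()) {}
    ~ModelWrapper() { model_destroy(m_model); }
    ModelWrapper(const ModelWrapper &) = delete;             // avoid double-free
    ModelWrapper &operator=(const ModelWrapper &) = delete;
private:
    llmodel_model m_model;
};

int main() {
    ModelWrapper model;  // freed automatically at end of scope
}
```
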
Cosmic Snow
ae4a275bcd Fix Windows MSVC AVX builds
- bug introduced in 0cb2b86730
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
2023-06-12 08:55:55 -07:00
Adam Treat
b906fb4057 When recalculating context we can't erase the BOS. 2023-06-12 08:43:20 -07:00
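
In other words: when the rolling context is trimmed to fit the window, the BOS token at position 0 must survive the trim. A hedged sketch of that invariant (hypothetical names; not the actual chat recalculation code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Token = std::int32_t;

// Drop the oldest tokens when the context exceeds the window, but never
// erase index 0: that's the BOS token the model expects to start from.
void shrinkContext(std::vector<Token> &past, std::size_t nCtx) {
    if (past.size() <= nCtx)
        return;
    const std::size_t excess = past.size() - nCtx;
    // Erase after the BOS (index 0), not from the very beginning.
    past.erase(past.begin() + 1, past.begin() + 1 + excess);
}
```
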
Adam Treat
0ae026e197 Try and fix a false positive from codespell. 2023-06-12 06:39:55 -07:00
Adam Treat
68ff7001ad Bugfixes for prompt syntax highlighting. 2023-06-12 05:55:14 -07:00
Adam Treat
60d95cdd9b Fix some bugs with bash syntax and add some C23 keywords. 2023-06-12 05:08:18 -07:00
Adam Treat
e986f18904 Add c++/c highlighting support. 2023-06-12 05:08:18 -07:00
Adam Treat
ae46234261 Spelling error. 2023-06-11 14:20:05 -07:00
Adam Treat
318c51c141 Add code blocks and python syntax highlighting. 2023-06-11 14:20:05 -07:00
Adam Treat
b67cba19f0 Don't interfere with selection. 2023-06-11 14:20:05 -07:00
Adam Treat
50c5b82e57 Clean up the context links a bit. 2023-06-11 14:20:05 -07:00
AT
a9c2f47303
Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
2023-06-10 10:15:38 -04:00
Aaron Miller
d3ba1295a7
Metal+LLama take two (#929)
Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Adam Treat
b162b5c64e Revert "llama on Metal (#885)"
This reverts commit c55f81b860.
2023-06-09 15:08:46 -04:00
Aaron Miller
c55f81b860
llama on Metal (#885)
Support latest llama with Metal

---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 14:58:12 -04:00
niansa/tuxifan
14e9ccbc6a Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 17:01:19 +02:00
Richard Guo
e0a8480c0e
Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method

* cleanup

* bump version number for clarity

* added replace in decode to avoid UnicodeDecodeError exceptions

* revert back to _build_prompt
2023-06-09 10:17:44 -04:00
niansa/tuxifan
f03da8d732 Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 08:55:15 -04:00
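
For reference, the language rule behind this cleanup: everything declared in an anonymous namespace already has internal linkage, so a `static` keyword on top of it is redundant. A tiny illustration with a made-up variable, not the actual replit.cpp contents:

```cpp
namespace {
// Internal linkage already applies inside an anonymous namespace, so
// writing `static const int kDefaultSeed = 42;` would say it twice.
const int kDefaultSeed = 42;
}

int main() { return kDefaultSeed == 42 ? 0 : 1; }
```
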
pingpongching
0d0fae0ca8 Change the default values for generation in GUI 2023-06-09 08:51:09 -04:00
Adam Treat
8fb73c2114 Forgot to bump. 2023-06-09 08:45:31 -04:00
Richard Guo
be2310322f update models json with replit model 2023-06-09 08:44:46 -04:00
Adam Treat
f2387d6f77 Always sync for circleci. 2023-06-09 08:42:49 -04:00
Claudius Ellsel
3c1b59f5c6
Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-09 10:13:35 +02:00
niansa
0cb2b86730 Synced llama.cpp.cmake with upstream 2023-06-08 18:21:32 -04:00
Adam Treat
343a6a308f Circleci builds for Linux, Windows, and macOS for gpt4all-chat. 2023-06-08 18:02:44 -04:00
Aaron Miller
47fbc0e309
non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp; without this, temp=0.0 will just
scale all the logits to infinity and give bad output
2023-06-08 11:08:30 -07:00
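
Dividing logits by a temperature of zero sends them to infinity, so the guard is to bypass temperature scaling entirely and take the argmax. A minimal sketch of greedy selection (not the llama.cpp sampler itself):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using Token = std::int32_t;

// temp <= 0: skip softmax/temperature entirely and return the single
// highest-scoring token, mirroring llama.cpp's handling of this case.
Token sampleGreedy(const std::vector<float> &logits) {
    auto best = std::max_element(logits.begin(), logits.end());
    return static_cast<Token>(std::distance(logits.begin(), best));
}
```
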
Aaron Miller
b14953e136
sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize, which is for when a model has a larger embedding
tensor / number of output logits than actually trained tokens, to
allow room for adding extras in finetuning. Presently all of our
models have had "placeholder" tokens in the vocab, so this hasn't
broken anything, but if the sizes did differ we would want the
equivalent of `logits[:actualVocabSize]` (the start point is
unchanged), not `logits[-actualVocabSize:]` (which is what the
removed offset computed).
2023-06-08 11:08:10 -07:00
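
Put concretely: the desired slice starts at the beginning of the logits buffer and takes the first actualVocabSize entries, while the removed offset instead started actualVocabSize entries from the end. A sketch under the assumption of a flat logits array (names are illustrative):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// The model may emit more logits than the tokenizer has real tokens,
// with placeholder rows reserved for finetuning extras. We want
// logits[:actualVocabSize]: keep the start, cap the length.
std::pair<const float *, std::size_t>
trainedLogits(const std::vector<float> &logits, std::size_t actualVocabSize) {
    // The buggy equivalent of logits[-actualVocabSize:] would have been
    //   logits.data() + logits.size() - actualVocabSize
    return {logits.data(), actualVocabSize};
}
```
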
Andriy Mulyar
eb26293205
Update CollectionsDialog.qml (#856)
Phrasing for localdocs

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Claudius Ellsel
39a7c35d03
Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation

Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Adam Treat
010a04d96f Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 89910c7ca8.
2023-06-08 07:23:41 -04:00
Adam Treat
7e304106cc Fix for windows. 2023-06-07 12:58:51 -04:00
niansa/tuxifan
89910c7ca8
Synced llama.cpp.cmake with upstream (#887) 2023-06-07 09:18:22 -07:00
Richard Guo
c4706d0c14
Replit Model (#713)
* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but lots of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f Supports downloading officially supported models not hosted on gpt4all R2 2023-06-06 16:21:02 -04:00
Andriy Mulyar
266f13aee9
Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Ettore Di Giacinto
44dc1ade62
Set thread counts after loading model (#836) 2023-06-05 21:35:40 +02:00
Adam Treat
fdffad9efe New release notes 2023-06-05 14:55:59 -04:00