# gpt4all/.circleci/continue_config.yml
version: 2.1
orbs:
win: circleci/windows@5.0
python: circleci/python@1.2
node: circleci/node@5.1
parameters:
run-default-workflow:
type: boolean
default: false
run-python-workflow:
type: boolean
default: false
run-chat-workflow:
type: boolean
default: false
run-ts-workflow:
type: boolean
default: false
run-csharp-workflow:
type: boolean
default: false
jobs:
default-job:
docker:
- image: circleci/python:3.7
steps:
- run: echo "CircleCI pipeline triggered"
build-offline-chat-installer-macos:
macos:
xcode: 14.0.0
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # this is the new step to restore cache
keys:
- macos-qt-cache_v2
- run:
name: Installing Qt
command: |
if [ ! -d ~/Qt ]; then
curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
/Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
fi
- save_cache: # this is the new step to save cache
key: macos-qt-cache_v2
paths:
- ~/Qt
- run:
name: Build
command: |
mkdir build
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.6/bin
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-DCMAKE_GENERATOR:STRING=Ninja \
-DBUILD_UNIVERSAL=ON \
-DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-S ../gpt4all-chat \
-B .
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
mkdir upload
cp gpt4all-installer-* upload
- store_artifacts:
path: build/upload
build-offline-chat-installer-linux:
machine:
image: ubuntu-2204:2023.04.2
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # this is the new step to restore cache
keys:
- linux-qt-cache
- run:
name: Setup Linux and Dependencies
command: |
wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk patchelf
- run:
name: Installing Qt
command: |
if [ ! -d ~/Qt ]; then
wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
chmod +x qt-unified-linux-x64-4.6.0-online.run
./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
fi
- save_cache: # this is the new step to save cache
key: linux-qt-cache
paths:
- ~/Qt
- run:
name: Build linuxdeployqt
command: |
git clone https://github.com/nomic-ai/linuxdeployqt
cd linuxdeployqt && qmake && sudo make install
- run:
name: Build
command: |
set -eo pipefail
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.6/bin
mkdir build
cd build
mkdir upload
~/Qt/Tools/CMake/bin/cmake -DGPT4ALL_OFFLINE_INSTALLER=ON -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
~/Qt/Tools/CMake/bin/cmake --build . --target all
~/Qt/Tools/CMake/bin/cmake --build . --target install
~/Qt/Tools/CMake/bin/cmake --build . --target package
cp gpt4all-installer-* upload
- store_artifacts:
path: build/upload
build-offline-chat-installer-windows:
machine:
image: 'windows-server-2019-vs2019:2022.08.1'
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # this is the new step to restore cache
keys:
- windows-qt-cache
- run:
name: Installing Qt
command: |
if (-not (Test-Path C:\Qt)) {
Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
& .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
}
- save_cache: # this is the new step to save cache
key: windows-qt-cache
paths:
- C:\Qt
- run:
name: Install VulkanSDK
command: |
Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Build
command: |
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.6\bin"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
"-DCMAKE_GENERATOR:STRING=Ninja" `
"-DCMAKE_BUILD_TYPE=Release" `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
"-DGPT4ALL_OFFLINE_INSTALLER=ON" `
"-S ..\gpt4all-chat" `
"-B ."
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
mkdir upload
copy gpt4all-installer-win64.exe upload
- store_artifacts:
path: build/upload
build-gpt4all-chat-linux:
machine:
image: ubuntu-2204:2023.04.2
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # this is the new step to restore cache
keys:
- linux-qt-cache
- run:
name: Setup Linux and Dependencies
command: |
wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk
- run:
name: Installing Qt
command: |
if [ ! -d ~/Qt ]; then
wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
chmod +x qt-unified-linux-x64-4.6.0-online.run
./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
fi
- save_cache: # this is the new step to save cache
key: linux-qt-cache
paths:
- ~/Qt
- run:
name: Build
command: |
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
mkdir build
cd build
~/Qt/Tools/CMake/bin/cmake -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
~/Qt/Tools/CMake/bin/cmake --build . --target all
build-gpt4all-chat-windows:
machine:
image: 'windows-server-2019-vs2019:2022.08.1'
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # this is the new step to restore cache
keys:
- windows-qt-cache
- run:
name: Installing Qt
command: |
if (-not (Test-Path C:\Qt)) {
Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
& .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
}
- save_cache: # this is the new step to save cache
key: windows-qt-cache
paths:
- C:\Qt
- run:
name: Install VulkanSDK
command: |
Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Build
command: |
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
"-DCMAKE_GENERATOR:STRING=Ninja" `
"-DCMAKE_BUILD_TYPE=Release" `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
"-S ..\gpt4all-chat" `
"-B ."
& "C:\Qt\Tools\Ninja\ninja.exe"
build-gpt4all-chat-macos:
macos:
xcode: 14.0.0
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # this is the new step to restore cache
keys:
- macos-qt-cache_v2
- run:
name: Installing Qt
command: |
if [ ! -d ~/Qt ]; then
curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
/Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
fi
- save_cache: # this is the new step to save cache
key: macos-qt-cache_v2
paths:
- ~/Qt
- run:
name: Build
command: |
mkdir build
cd build
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-DCMAKE_GENERATOR:STRING=Ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-S ../gpt4all-chat \
-B .
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
space broke the yaml codespell oops forgot to add runtimes folder bump version try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images add fallback for unknown architectures attached to wrong workspace hopefuly fix moving everything under backend to persist should work now * Update README.md Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com> --------- Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com> Co-authored-by: Adam Treat <treat.adam@gmail.com> Co-authored-by: Richard Guo <richardg7890@gmail.com> Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com> Co-authored-by: felix <felix@zaslavskiy.net> Co-authored-by: Aaron Miller <apage43@ninjawhale.com> Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com> Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
build-ts-docs:
docker:
- image: cimg/base:stable
steps:
- checkout
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- node/install-packages:
pkg-manager: yarn
app-dir: gpt4all-bindings/typescript
override-ci-command: yarn install
- run:
name: Build TypeScript docs
command: |
cd gpt4all-bindings/typescript
yarn docs:build
build-py-docs:
docker:
- image: circleci/python:3.8
steps:
- checkout
- run:
name: Install dependencies
command: |
sudo apt-get update
sudo apt-get -y install python3 python3-pip
sudo pip3 install awscli --upgrade
sudo pip3 install mkdocs mkdocs-material mkautodoc 'mkdocstrings[python]'
- run:
name: Make Documentation
command: |
cd gpt4all-bindings/python/
mkdocs build
- run:
name: Deploy Documentation
command: |
cd gpt4all-bindings/python/
aws s3 cp ./site s3://docs.gpt4all.io/ --recursive | cat
- run:
name: Invalidate docs.gpt4all.io cloudfront
command: aws cloudfront create-invalidation --distribution-id E1STQOW63QL2OH --paths "/*"
build-py-linux:
machine:
image: ubuntu-2204:2023.04.2
steps:
- checkout
- run:
name: Set Python Version
command: pyenv global 3.11.2
- run:
name: Install dependencies
command: |
wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
sudo apt-get update
sudo apt-get install -y cmake build-essential vulkan-sdk
pip install setuptools wheel cmake
- run:
name: Build C library
command: |
git submodule init
git submodule update
cd gpt4all-backend
mkdir build
cd build
cmake ..
cmake --build . --parallel
- run:
name: Build wheel
command: |
cd gpt4all-bindings/python/
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
- store_artifacts:
path: gpt4all-bindings/python/dist
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
- "*.whl"
build-py-macos:
macos:
xcode: "14.2.0"
resource_class: macos.m1.large.gen1
steps:
- checkout
- run:
name: Install dependencies
command: |
brew install cmake
pip install setuptools wheel cmake
- run:
name: Build C library
command: |
git submodule init
git submodule update
cd gpt4all-backend
mkdir build
cd build
cmake .. -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64"
cmake --build . --parallel
- run:
name: Build wheel
command: |
cd gpt4all-bindings/python
python setup.py bdist_wheel --plat-name=macosx_10_15_universal2
- store_artifacts:
path: gpt4all-bindings/python/dist
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
- "*.whl"
build-py-windows:
executor:
name: win/default
steps:
- checkout
- run:
name: Install MinGW64
command: choco install -y mingw --force --no-progress
- run:
name: Install VulkanSDK
command: |
Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install dependencies
command:
choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Install Python dependencies
command: pip install setuptools wheel cmake
- run:
name: Build C library
command: |
git submodule init
git submodule update
cd gpt4all-backend
mkdir build
cd build
$env:Path += ";C:\ProgramData\mingw64\mingw64\bin"
$env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
cmake -G "MinGW Makefiles" .. -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON -DKOMPUTE_OPT_USE_BUILT_IN_VULKAN_HEADER=OFF
cmake --build . --parallel
- run:
name: Build wheel
# TODO: As part of this task, we need to move mingw64 binaries into package.
# This is terrible and needs a more robust solution eventually.
command: |
cd gpt4all-bindings/python
cd gpt4all
mkdir llmodel_DO_NOT_MODIFY
mkdir llmodel_DO_NOT_MODIFY/build/
cp 'C:\ProgramData\mingw64\mingw64\bin\*.dll' 'llmodel_DO_NOT_MODIFY/build/'
cd ..
python setup.py bdist_wheel --plat-name=win_amd64
- store_artifacts:
path: gpt4all-bindings/python/dist
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
- "*.whl"
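# How the wheel jobs feed the upload job: each build-py-* job above persists
# its built wheel to the workspace with root gpt4all-bindings/python/dist, so
# store-and-upload-wheels below can attach one workspace and publish every
# platform's wheel in a single twine invocation. The producer/consumer pair
# looks like this (a sketch; names match the jobs above):
#
#   - persist_to_workspace:            # in each build-py-* job
#       root: gpt4all-bindings/python/dist
#       paths:
#         - "*.whl"
#   - attach_workspace:                # in store-and-upload-wheels
#       at: /tmp/workspace             # wheels land directly under this path
#
# A new platform job only needs to persist its wheel under the same root to
# be included in the upload.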
store-and-upload-wheels:
docker:
- image: circleci/python:3.8
steps:
- setup_remote_docker
- attach_workspace:
at: /tmp/workspace
- run:
name: Install dependencies
command: |
sudo apt-get update
sudo apt-get install -y cmake build-essential
pip install setuptools wheel twine
- run:
name: Upload Python package
command: |
twine upload /tmp/workspace/*.whl --username __token__ --password $PYPI_CRED
- store_artifacts:
path: /tmp/workspace
build-bindings-backend-linux:
machine:
image: ubuntu-2204:2023.04.2
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- run:
name: Install dependencies
command: |
wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
sudo apt-get update
sudo apt-get install -y cmake build-essential vulkan-sdk
- run:
name: Build Libraries
command: |
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
cmake ../..
cmake --build . --parallel --config Release
mkdir ../linux-x64
cp -L *.so ../linux-x64 # copy with -L to dereference; persist_to_workspace mangles symlinks otherwise
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/linux-x64/*.so
build-bindings-backend-macos:
macos:
xcode: "14.0.0"
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- run:
name: Install dependencies
command: |
brew install cmake
- run:
name: Build Libraries
command: |
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
cmake ../.. -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64"
cmake --build . --parallel --config Release
mkdir ../osx-x64
cp -L *.dylib ../osx-x64
cp ../../llama.cpp-mainline/*.metal ../osx-x64
ls ../osx-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/osx-x64/*.dylib
- runtimes/osx-x64/*.metal
build-bindings-backend-windows:
executor:
name: win/default
size: large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- run:
name: Install MinGW64
command: choco install -y mingw --force --no-progress
- run:
name: Install VulkanSDK
command: |
Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install dependencies
command: |
choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Build Libraries
command: |
$MinGWBin = "C:\ProgramData\mingw64\mingw64\bin"
$Env:Path += ";$MinGWBin"
$Env:Path += ";C:\Program Files\CMake\bin"
$Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
cd gpt4all-backend
mkdir runtimes/win-x64
cd runtimes/win-x64
cmake -G "MinGW Makefiles" -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON ../..
cmake --build . --parallel --config Release
cp "$MinGWBin\libgcc*.dll" .
cp "$MinGWBin\libstdc++*.dll" .
cp "$MinGWBin\libwinpthread*.dll" .
cp bin/*.dll .
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/win-x64/*.dll
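# Note on the libgcc/libstdc++/libwinpthread copies above: the MinGW runtime
# DLLs are bundled next to the built backend DLLs, presumably so the persisted
# win-x64 runtimes can load on consumer machines that have no MinGW toolchain
# installed. If the MinGW-built backend grows a new runtime dependency, it
# would need a matching cp line, e.g. (hypothetical):
#
#   cp "$MinGWBin\libsomedep*.dll" .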
build-bindings-backend-windows-msvc:
machine:
image: 'windows-server-2022-gui:2023.03.1'
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- run:
name: Install VulkanSDK
command: |
Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Install dependencies
command: |
choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Build Libraries
command: |
$Env:Path += ";C:\Program Files\CMake\bin"
$Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
cd gpt4all-backend
mkdir runtimes/win-x64_msvc
cd runtimes/win-x64_msvc
cmake -G "Visual Studio 17 2022" -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON -A X64 ../..
cmake --build . --parallel --config Release
cp bin/Release/*.dll .
- persist_to_workspace:
root: gpt4all-backend
paths:
- runtimes/win-x64_msvc/*.dll
build-csharp-linux:
docker:
- image: mcr.microsoft.com/dotnet/sdk:7.0-jammy # Ubuntu 22.04
steps:
- checkout
- attach_workspace:
at: /tmp/workspace
- run:
name: "Prepare Native Libs"
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes/linux-x64/native
cp /tmp/workspace/runtimes/linux-x64/*.so runtimes/linux-x64/native/
ls -R runtimes
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-nix
- run:
name: "Install project dependencies"
command: |
cd gpt4all-bindings/csharp
dotnet restore Gpt4All
- save_cache:
paths:
- ~/.nuget/packages
key: gpt4all-csharp-nuget-packages-nix
- run:
name: Build C# Project
command: |
cd gpt4all-bindings/csharp
dotnet build Gpt4All --configuration Release --nologo
- run:
name: "Run C# Tests"
command: |
cd gpt4all-bindings/csharp
dotnet test Gpt4All.Tests -v n -c Release --filter "SKIP_ON_CI!=True" --logger "trx"
- run:
name: Test results
command: |
cd gpt4all-bindings/csharp/Gpt4All.Tests
dotnet tool install -g trx2junit
export PATH="$PATH:$HOME/.dotnet/tools"
trx2junit TestResults/*.trx
- store_test_results:
path: gpt4all-bindings/csharp/Gpt4All.Tests/TestResults
build-csharp-windows:
executor:
name: win/default
size: large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-win
- attach_workspace:
at: C:\Users\circleci\workspace
- run:
name: "Prepare Native Libs"
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes\win-x64\native
cp C:\Users\circleci\workspace\runtimes\win-x64\*.dll runtimes\win-x64\native\
ls -R runtimes
- run:
name: "Install project dependencies"
command: |
cd gpt4all-bindings/csharp
dotnet.exe restore Gpt4All
- save_cache:
paths:
- C:\Users\circleci\.nuget\packages
key: gpt4all-csharp-nuget-packages-win
- run:
name: Build C# Project
command: |
cd gpt4all-bindings/csharp
dotnet.exe build Gpt4All --configuration Release --nologo
- run:
name: "Run C# Tests"
command: |
cd gpt4all-bindings/csharp
dotnet.exe test Gpt4All.Tests -v n -c Release --filter "SKIP_ON_CI!=True" --logger "trx"
- run:
name: Test results
command: |
cd gpt4all-bindings/csharp/Gpt4All.Tests
dotnet tool install -g trx2junit
$Env:Path += ";$Env:USERPROFILE\.dotnet\tools"
trx2junit TestResults/*.trx
- store_test_results:
path: gpt4all-bindings/csharp/Gpt4All.Tests/TestResults
build-csharp-macos:
macos:
xcode: "14.0.0"
steps:
- checkout
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-nix
- run:
name: Install dependencies
command: |
brew install --cask dotnet-sdk
- attach_workspace:
at: /tmp/workspace
- run:
name: "Prepare Native Libs"
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes/osx/native
cp /tmp/workspace/runtimes/osx-x64/*.dylib runtimes/osx/native/
cp /tmp/workspace/runtimes/osx-x64/*.metal runtimes/osx/native/
ls -R runtimes
- run:
name: "Install project dependencies"
command: |
cd gpt4all-bindings/csharp
dotnet restore Gpt4All
- save_cache:
paths:
- ~/.nuget/packages
key: gpt4all-csharp-nuget-packages-nix
- run:
name: Build C# Project
command: |
cd gpt4all-bindings/csharp
dotnet build Gpt4All --configuration Release --nologo
- run:
name: "Run C# Tests"
command: |
cd gpt4all-bindings/csharp
dotnet test Gpt4All.Tests -v n -c Release --filter "SKIP_ON_CI!=True" --logger "trx"
- run:
name: Test results
command: |
cd gpt4all-bindings/csharp/Gpt4All.Tests
dotnet tool install -g trx2junit
export PATH="$PATH:$HOME/.dotnet/tools"
trx2junit TestResults/*.trx
- store_test_results:
path: gpt4all-bindings/csharp/Gpt4All.Tests/TestResults
store-and-upload-nupkgs:
docker:
- image: mcr.microsoft.com/dotnet/sdk:6.0-jammy # Ubuntu 22.04
steps:
- attach_workspace:
at: /tmp/workspace
- checkout
- restore_cache:
keys:
- gpt4all-csharp-nuget-packages-nix
- run:
name: NuGet Pack
command: |
cd gpt4all-bindings/csharp
mkdir -p runtimes/linux-x64/native
cp /tmp/workspace/runtimes/linux-x64/*.so runtimes/linux-x64/native/
mkdir -p runtimes/win-x64/native
cp /tmp/workspace/runtimes/win-x64/*.dll runtimes/win-x64/native/
mkdir -p runtimes/osx/native
cp /tmp/workspace/runtimes/osx-x64/*.dylib runtimes/osx/native/
cp /tmp/workspace/runtimes/osx-x64/*.metal runtimes/osx/native/
dotnet pack ./Gpt4All/Gpt4All.csproj -p:IncludeSymbols=true -p:SymbolPackageFormat=snupkg -c Release
dotnet nuget push ./Gpt4All/bin/Release/Gpt4All.*.nupkg -s $NUGET_URL -k $NUGET_TOKEN --skip-duplicate
- store_artifacts:
path: gpt4all-bindings/csharp/Gpt4All/bin/Release
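# The job above aggregates the native runtimes persisted by all three
# build-bindings-backend-* jobs (linux-x64 .so, win-x64 .dll, osx-x64
# .dylib/.metal) into NuGet runtimes/<rid>/native folders before packing, so
# the published Gpt4All package is self-contained across platforms.
# $NUGET_URL and $NUGET_TOKEN are expected to come from the CircleCI
# environment (e.g. project settings or a context).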
space broke the yaml codespell oops forgot to add runtimes folder bump version try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images add fallback for unknown architectures attached to wrong workspace hopefuly fix moving everything under backend to persist should work now * Update README.md Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com> --------- Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com> Co-authored-by: Adam Treat <treat.adam@gmail.com> Co-authored-by: Richard Guo <richardg7890@gmail.com> Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com> Co-authored-by: felix <felix@zaslavskiy.net> Co-authored-by: Aaron Miller <apage43@ninjawhale.com> Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com> Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 15:46:40 +00:00
build-nodejs-linux:
docker:
- image: cimg/base:stable
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
      - run:
          # build the Node-API native addon prebuild against Node 18.16
          command: |
            cd gpt4all-bindings/typescript
            yarn prebuildify -t 18.16.0 --napi
      - run:
          # collect the built runtimes and prebuilds under gpt4all-backend so
          # they can be persisted to the workspace below
          command: |
            mkdir -p gpt4all-backend/prebuilds/linux-x64
            mkdir -p gpt4all-backend/runtimes/linux-x64
            cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so gpt4all-backend/runtimes/linux-x64
            cp gpt4all-bindings/typescript/prebuilds/linux-x64/*.node gpt4all-backend/prebuilds/linux-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/linux-x64/*.node
- runtimes/linux-x64/*-*.so
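  # A downstream job can consume the artifacts persisted above by re-attaching
  # the same workspace root. Minimal sketch in this config's own style; the job
  # name and listing step are illustrative only, not part of this pipeline:
  #
  #   package-nodejs-example:
  #     docker:
  #       - image: cimg/base:stable
  #     steps:
  #       - attach_workspace:
  #           at: /tmp/gpt4all-backend
  #       - run: ls /tmp/gpt4all-backend/prebuilds/linux-x64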
build-nodejs-macos:
macos:
xcode: "14.0.0"
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
name: "Persist macOS prebuilds and runtimes to workspace"
command: |
mkdir -p gpt4all-backend/prebuilds/darwin-x64
mkdir -p gpt4all-backend/runtimes/darwin-x64
cp /tmp/gpt4all-backend/runtimes/osx-x64/*-*.* gpt4all-backend/runtimes/darwin-x64
cp gpt4all-bindings/typescript/prebuilds/darwin-x64/*.node gpt4all-backend/prebuilds/darwin-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/darwin-x64/*.node
- runtimes/darwin-x64/*-*.*
build-nodejs-windows:
executor:
name: win/default
size: large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- attach_workspace:
at: /tmp/gpt4all-backend
- run: choco install wget -y
- run:
command: wget https://nodejs.org/dist/v18.16.0/node-v18.16.0-x86.msi -P C:\Users\circleci\Downloads\
shell: cmd.exe
- run: MsiExec.exe /i C:\Users\circleci\Downloads\node-v18.16.0-x86.msi /qn
- run:
command: |
Start-Process powershell -verb runAs -Args "-start GeneralProfile"
nvm install 18.16.0
nvm use 18.16.0
- run: node --version
- run:
command: |
npm install -g yarn
cd gpt4all-bindings/typescript
yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- run:
command: |
mkdir -p gpt4all-backend/prebuilds/win32-x64
mkdir -p gpt4all-backend/runtimes/win32-x64
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll gpt4all-backend/runtimes/win32-x64
cp gpt4all-bindings/typescript/prebuilds/win32-x64/*.node gpt4all-backend/prebuilds/win32-x64
- persist_to_workspace:
root: gpt4all-backend
paths:
- prebuilds/win32-x64/*.node
- runtimes/win32-x64/*-*.dll
prepare-npm-pkg:
docker:
- image: cimg/base:stable
steps:
- attach_workspace:
at: /tmp/gpt4all-backend
- checkout
- node/install:
install-yarn: true
node-version: "18.16"
- run: node --version
- run:
command: |
cd gpt4all-bindings/typescript
# excluding llmodel: the Node.js bindings don't need llmodel.dll
mkdir -p runtimes/win32-x64/native
mkdir -p prebuilds/win32-x64/
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll runtimes/win32-x64/native/
cp /tmp/gpt4all-backend/prebuilds/win32-x64/*.node prebuilds/win32-x64/
mkdir -p runtimes/linux-x64/native
mkdir -p prebuilds/linux-x64/
cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so runtimes/linux-x64/native/
cp /tmp/gpt4all-backend/prebuilds/linux-x64/*.node prebuilds/linux-x64/
mkdir -p runtimes/darwin-x64/native
mkdir -p prebuilds/darwin-x64/
cp /tmp/gpt4all-backend/runtimes/darwin-x64/*-*.* runtimes/darwin-x64/native/
cp /tmp/gpt4all-backend/prebuilds/darwin-x64/*.node prebuilds/darwin-x64/
# Fallback source build for platforms not covered by the prebuilds above
mv -f binding.ci.gyp binding.gyp
mkdir gpt4all-backend
cd ../../gpt4all-backend
mv llmodel.h llmodel.cpp llmodel_c.cpp llmodel_c.h sysinfo.h dlhandle.h ../gpt4all-bindings/typescript/gpt4all-backend/
# Test install
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
yarn run test
- run:
command: |
cd gpt4all-bindings/typescript
npm set //registry.npmjs.org/:_authToken=$NPM_TOKEN
npm publish
workflows:
version: 2
default:
when: << pipeline.parameters.run-default-workflow >>
jobs:
- default-job
build-chat-offline-installers:
when: << pipeline.parameters.run-chat-workflow >>
jobs:
- hold:
type: approval
- build-offline-chat-installer-macos:
requires:
- hold
- build-offline-chat-installer-windows:
requires:
- hold
- build-offline-chat-installer-linux:
requires:
- hold
build-and-test-gpt4all-chat:
when: << pipeline.parameters.run-chat-workflow >>
jobs:
- hold:
type: approval
- build-gpt4all-chat-linux:
requires:
- hold
- build-gpt4all-chat-windows:
requires:
- hold
- build-gpt4all-chat-macos:
requires:
- hold
deploy-docs:
when: << pipeline.parameters.run-python-workflow >>
jobs:
- build-ts-docs:
filters:
branches:
only:
- main
- build-py-docs:
filters:
branches:
only:
- main
build-py-deploy:
when: << pipeline.parameters.run-python-workflow >>
jobs:
- pypi-hold:
type: approval
- hold:
type: approval
- build-py-linux:
filters:
branches:
only:
requires:
- hold
- build-py-macos:
filters:
branches:
only:
requires:
- hold
- build-py-windows:
filters:
branches:
only:
requires:
- hold
- store-and-upload-wheels:
filters:
branches:
only:
requires:
- pypi-hold
- build-py-windows
- build-py-linux
- build-py-macos
build-bindings:
when:
or:
- << pipeline.parameters.run-python-workflow >>
- << pipeline.parameters.run-csharp-workflow >>
- << pipeline.parameters.run-ts-workflow >>
jobs:
- hold:
type: approval
- nuget-hold:
type: approval
- npm-hold:
type: approval
- build-bindings-backend-linux:
filters:
branches:
only:
requires:
- hold
- build-bindings-backend-macos:
filters:
branches:
only:
requires:
- hold
- build-bindings-backend-windows:
filters:
branches:
only:
requires:
- hold
- build-bindings-backend-windows-msvc:
filters:
branches:
only:
requires:
- hold
# NodeJs Jobs
- prepare-npm-pkg:
filters:
branches:
only:
requires:
- npm-hold
- build-nodejs-linux
- build-nodejs-windows
- build-nodejs-macos
- build-nodejs-linux:
filters:
branches:
only:
requires:
- npm-hold
- build-bindings-backend-linux
- build-nodejs-windows:
filters:
branches:
only:
requires:
- npm-hold
- build-bindings-backend-windows-msvc
- build-nodejs-macos:
filters:
branches:
only:
requires:
- npm-hold
- build-bindings-backend-macos
# CSharp Jobs
- build-csharp-linux:
filters:
branches:
only:
requires:
- nuget-hold
- build-bindings-backend-linux
- build-csharp-windows:
filters:
branches:
only:
requires:
- nuget-hold
- build-bindings-backend-windows
- build-csharp-macos:
filters:
branches:
only:
requires:
- nuget-hold
- build-bindings-backend-macos
- store-and-upload-nupkgs:
filters:
branches:
only:
requires:
- nuget-hold
- build-csharp-windows
- build-csharp-linux
- build-csharp-macos