JavaScript Bindings

The original GPT4All TypeScript bindings are now out of date.

Code (alpha)

import { LLModel, createCompletion, DEFAULT_DIRECTORY, DEFAULT_LIBRARIES_DIRECTORY } from '../src/gpt4all.js'

// Load a local model file, using the default bundled libraries.
const ll = new LLModel({
    model_name: 'ggml-vicuna-7b-1.1-q4_2.bin',
    model_path: './',
    library_path: DEFAULT_LIBRARIES_DIRECTORY
});

// Chat-style completion with a system prompt and a user message.
const response = await createCompletion(ll, [
    { role: 'system', content: 'You are meant to be annoying and unhelpful.' },
    { role: 'user', content: 'What is 1 + 1?' }
]);
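The response shape is assumed here to mirror OpenAI's chat-completion format (choices[0].message.content), in line with the Python API this package aims to mirror; check the generated docs before relying on it. A small helper for pulling out the reply text might look like:

```typescript
// Assumed response shape, mirroring OpenAI's chat completion format.
// Verify against the generated docs before relying on this.
interface CompletionResponse {
    choices: { message: { role: string; content: string } }[];
}

// Return the text of the first generated message, or null if none.
function replyText(response: CompletionResponse): string | null {
    return response.choices[0]?.message.content ?? null;
}
```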

API

  • The Node.js API aims to mirror the Python API. It is not a complete mirror yet, but many parts closely resemble their Python counterparts.
  • docs

Build Instructions

  • As of 05/21/2023, tested on Windows (MSVC). (Somehow got it to work on MSVC 🤯)
    • binding.gyp is the compile configuration
  • Tested on Ubuntu. Everything seems to work fine.
  • MinGW can also build the gpt4all-backend. HOWEVER, this package works only with MSVC-built DLLs.

Requirements

Build

  • The shell commands below assume the current working directory is typescript.

git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all/gpt4all-bindings/typescript

  • To build and rebuild:

yarn

  • The llama.cpp git submodule for gpt4all may be absent. If it is, run the following from the parent directory of llama.cpp:

git submodule update --init --depth 1 --recursive

As of the new backend, to build the backend:

yarn build:backend

This will build platform-dependent dynamic libraries and place them in runtimes/(platform)/native. The only current way to use them is to copy them into the current working directory of your application, i.e. wherever you run your Node application.

  • llama-xxxx.dll is required.
  • Depending on the model you are using, you will need to select the proper model loader.
    • For example, if you are running a Mosaic MPT model, you will need to select the mpt-(buildvariant).(dynamiclibrary) library.
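Because the libraries must sit in the process working directory, it can help to fail fast at startup when they are missing. A minimal sketch (the filenames passed in are placeholders, not real build outputs):

```typescript
import { existsSync } from 'node:fs';
import { join } from 'node:path';

// Given a directory and a list of required native library filenames,
// return the ones that are not present on disk. The actual filenames
// depend on your platform and model, as described above.
function missingLibraries(dir: string, required: string[]): string[] {
    return required.filter((name) => !existsSync(join(dir, name)));
}
```

For example, `missingLibraries(process.cwd(), [...])` called before constructing LLModel lets you report a clear error instead of a native load failure.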

Test

yarn test

Source Overview

src/

  • Extra helper functions to improve the developer experience
  • Typings for the native Node addon
  • The JavaScript interface

test/

  • Simple unit tests for some of the exported functions.
  • More advanced AI testing is not handled here.

spec/

  • Shows the typical look and feel of the API
  • Should work, assuming a model and the libraries are installed locally in the working directory

index.cc

  • The bridge between Node.js and C++; where the bindings live.

prompt.cc

  • Handles prompting and inference of models in a thread-safe, asynchronous way.

docs/

  • Autogenerated documentation, produced with the script yarn docs:build

Roadmap

This package is in active development, and breaking changes may happen until the API stabilizes. Here's the current to-do list:

  • prompt models via a thread-safe function in order to have proper non-blocking behavior in Node.js
  • createTokenStream, an async iterator that streams each token emitted from the model. Planning on following this example
  • proper unit testing (integrate with CircleCI)
  • publish to npm under the alpha tag gpt4all@alpha
  • have more people test on other platforms (macOS tester needed)
  • switch to the new pluggable backend
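createTokenStream is still on the roadmap, so nothing below is published API. As a purely hypothetical sketch, one way to expose tokens as an async iterator is to bridge a callback-based token emitter into an AsyncGenerator:

```typescript
// Hypothetical sketch only: bridges a callback-based generation function
// (which invokes onToken for each emitted token) into an async iterator.
async function* tokenStream(
    generate: (onToken: (token: string) => void) => Promise<void>
): AsyncGenerator<string> {
    const queue: string[] = [];
    let notify: (() => void) | null = null;
    let done = false;

    // Start generation; each callback enqueues a token and wakes the loop.
    const finished = generate((token) => {
        queue.push(token);
        notify?.();
    }).then(() => {
        done = true;
        notify?.();
    });

    while (true) {
        // Drain everything buffered so far.
        while (queue.length > 0) yield queue.shift()!;
        if (done) break;
        // Sleep until the next token (or completion) arrives.
        await new Promise<void>((resolve) => (notify = resolve));
        notify = null;
    }
    await finished;
}
```

A consumer would then write `for await (const token of tokenStream(...))` to receive tokens as they are produced.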