* feat(typescript)/dynamic template (#1287)
* remove packaged yarn
* prompt templates update wip
* prompt template update
* system prompt template, update types, remove embed promises, cleanup
* support both snakecased and camelcased prompt context
* fix #1277: libbert, libfalcon and libreplit libs not being moved into the right folder after build
* added support for modelConfigFile param, allowing the user to specify a local file instead of downloading the remote models.json. added a warning message if code fails to load a model config. included prompt context docs by amogus.
* snakecase warning, put logic for loading local models.json into listModels, added constant for the default remote model list url, test improvements, simpler hasOwnProperty call
* add DEFAULT_PROMPT_CONTEXT, export new constants
* add md5sum testcase and fix constants export
* update types
* throw if attempting to list models without a source
* rebuild docs
* fix download logging undefined url, toFixed typo, pass config filesize in for future progress report
* added overload with union types
* bump to 2.2.0, remove alpha
* code speling
---------
Co-authored-by: Andreas Obersteiner <8959303+iimez@users.noreply.github.com>
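Several commits above concern accepting both snake_cased and camelCased prompt context keys, with a deprecation warning for the snake_cased form. A minimal sketch of what such a normalization could look like (a hypothetical helper, not the binding's actual implementation):

```javascript
// Hypothetical helper (not the binding's actual code): accept snake_cased
// prompt context keys, warn, and normalize them to camelCase.
function normalizePromptContext(promptContext) {
    const normalized = {};
    for (const key of Object.keys(promptContext)) {
        if (key.includes('_')) {
            const camelKey = key.replace(/_(\w)/g, (_, c) => c.toUpperCase());
            console.warn(`Prompt context key '${key}' is snake_cased; prefer '${camelKey}'.`);
            normalized[camelKey] = promptContext[key];
        } else {
            normalized[key] = promptContext[key];
        }
    }
    return normalized;
}
```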
The original [GPT4All typescript bindings](https://github.com/nomic-ai/gpt4all-ts) are now out of date.
* [Documentation](#Documentation)
* New bindings created by [jacoobes](https://github.com/jacoobes), [limez](https://github.com/iimez) and the [nomic ai community](https://home.nomic.ai), for all to use.
* The nodejs api has made strides to mirror the python api. It is not 100% mirrored, but many pieces of the api resemble its python counterpart.
* Everything should work out of the box.
* See [API Reference](#api-reference)
### Chat Completion
```js
import { createCompletion, loadModel } from '../src/gpt4all.js'
```
* Can be obtained with the Visual Studio 2022 Build Tools
* python 3
### Build (from source)
```sh
git clone https://github.com/nomic-ai/gpt4all.git
```
* Handling prompting and inference of models in a threadsafe, asynchronous way.
#### docs/
### Known Issues
* Autogenerated documentation using the script `yarn docs:build`
* why your model may be spewing bull 💩
* The downloaded model is broken (just reinstall or download from official site)
* That's it so far
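A broken download can often be detected by comparing the file's md5 sum against the one listed in the model config (the changelog above mentions an md5sum testcase). A standalone sketch using Node's crypto module, not the binding's built-in check:

```javascript
import { createHash } from 'node:crypto';

// Standalone sketch: compare a downloaded model's md5 sum against the
// expected value from its model config entry. Pass the file contents as a Buffer.
function md5Hex(data) {
    return createHash('md5').update(data).digest('hex');
}

function isDownloadIntact(data, expectedMd5) {
    return md5Hex(data) === expectedMd5;
}
```

In practice you would pass the model file's contents, e.g. `readFileSync(modelFilePath)`, and redownload on a mismatch.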
### Roadmap
This package is in active development, and breaking changes may happen until the api stabilizes. Here's what's on the todo list:
* \[x] prompt models via a threadsafe function in order to have proper non blocking behavior in nodejs
* \[ ] ~~createTokenStream, an async iterator that streams each token emitted from the model. Planning on following this [example](https://github.com/nodejs/node-addon-examples/tree/main/threadsafe-async-iterator)~~ May not implement unless someone else can complete it
* \[x] proper unit testing (integrate with circle ci)
* \[x] publish to npm under alpha tag `gpt4all@alpha`
* \[x] have more people test on other platforms (mac tester needed)
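The crossed-out createTokenStream item could, if ever implemented, look roughly like this hypothetical sketch (the model is faked with a fixed token list so the control flow is runnable; real tokens would arrive from a worker thread, as in the linked example):

```javascript
// Hypothetical sketch of a createTokenStream-style API (not implemented in
// the binding): an async generator that yields tokens as the model emits them.
async function* createTokenStream(prompt, fakeTokens = ['Hello', ' ', 'world']) {
    for (const token of fakeTokens) {
        // in a real implementation, each token would be produced by the
        // native model on a separate thread and forwarded to this iterator
        yield token;
    }
}

const received = [];
for await (const token of createTokenStream('greet me')) {
    received.push(token);
}
```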
@deprecated These model names are outdated and this type will not be maintained; please use a string literal instead
##### gptj
##### Examples

```javascript
// `model` is an InferenceModel returned by loadModel
const messages = [
    { role: 'system', message: 'You are a weather forecaster.' },
    { role: 'user', message: 'should i go out today?' }
];
const completion = await createCompletion(model, messages);
```
NOTE: This state data is specific to the type of model you have created.
Returns **[number](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number)** The size in bytes of the model's internal state.
##### threadCount
Get the number of threads used for model inference.
The default is the number of physical cores your computer has.
Returns **[number](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number)** The number of threads used for model inference.
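For illustration, a plain-Node approximation of that default (note the caveat: `os.cpus()` reports logical cores, while the binding's default is based on physical cores, which it queries natively):

```javascript
import os from 'node:os';

// Rough approximation only: os.cpus() counts logical cores, so on machines
// with SMT this will be higher than the binding's physical-core default.
function defaultThreadCount() {
    return Math.max(1, os.cpus().length);
}
```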
##### setThreadCount
Set the number of threads used for model inference.
###### Parameters
* `newNumber`**[number](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number)** The new number of threads.
Returns **void** 
##### raw\_prompt
Prompt the model with a given input and optional parameters.
This is the raw output from the model.
Use the exported prompt function instead if you want a return value.
###### Parameters
* `q`**[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)** The prompt input.
* `params`**Partial<[LLModelPromptContext](#llmodelpromptcontext)>** Optional parameters for the prompt context.
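Because `params` is a Partial, callers pass only the fields they want to override and everything else falls back to defaults. A hedged sketch of that merge; the values below are illustrative, not the binding's actual DEFAULT_PROMPT_CONTEXT:

```javascript
// Illustrative values only; the binding exports the real DEFAULT_PROMPT_CONTEXT.
const DEFAULT_PROMPT_CONTEXT = {
    temp: 0.7,
    topK: 40,
    topP: 0.4,
    nPredict: 128,
};

// Caller-supplied partial params win; everything else falls back to defaults.
function withDefaults(params = {}) {
    return { ...DEFAULT_PROMPT_CONTEXT, ...params };
}
```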
Loads a machine learning model with the specified name. This is the de facto way to create a model.
By default, this will download the model from the official GPT4ALL website if it is not already present at the given path.
##### Parameters
* `modelName`**[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)** The name of the model to load.
* `options`**(LoadModelOptions | [undefined](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/undefined))?** (Optional) Additional options for loading the model.
Returns **[Promise](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise)<(InferenceModel | EmbeddingModel)>** A promise that resolves to an instance of the loaded model.
#### createCompletion
The nodejs equivalent to python binding's chat\_completion
##### Parameters
* `model`**InferenceModel** The language model object.
* `messages`**[Array](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array)<[PromptMessage](#promptmessage)>** The array of messages for the conversation.
* `options`**[CompletionOptions](#completionoptions)** The options for creating the completion.
Returns **[CompletionReturn](#completionreturn)** The completion result.
#### createEmbedding
The nodejs moral equivalent to python binding's Embed4All().embed()
##### Parameters
* `model`**EmbeddingModel** The language model object.
* `text`**[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)** text to embed
Returns **[Float32Array](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Float32Array)** The resulting embedding.
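The returned Float32Array can be compared against other embeddings, for example with cosine similarity. A self-contained sketch, not part of the binding's API:

```javascript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a, b) {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```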
* Throws **[Error](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error)** If the model already exists in the specified location.
* Throws **[Error](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Error)** If the model cannot be found at the specified url.
Returns **[DownloadController](#downloadcontroller)** object that allows controlling the download process.
#### DownloadModelOptions
Options for the model download process.
##### modelPath
Location to download the model to.
Defaults to process.cwd(), the current working directory.
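Putting modelPath and the model's file name together, the on-disk location works out to a simple join (an illustrative helper, not the binding's actual code):

```javascript
import path from 'node:path';

// Illustrative helper: where a model file would end up on disk,
// given the modelPath option and the model's file name.
function resolveModelPath(modelName, modelPath = process.cwd()) {
    return path.join(modelPath, modelName);
}
```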
```js
// otherwise, use the system prompt from the model config and ignore system messages
fullPrompt += defaultSystemPrompt + "\n\n";
}
if (hasDefaultHeader) {
    fullPrompt.push(`### Instruction: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.`);
// These were the default header/footer prompts used for non-chat single turn completions.
// both seem to be working well still with some models, so keeping them here for reference.
// promptHeader: '### Instruction: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.',
```
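Tying the fragment above together, here is a runnable sketch of assembling a non-chat, single-turn prompt. The header string is the one quoted in the comment above; the `### Response:` footer is an assumption based on the usual instruction/response format, not confirmed by this document:

```javascript
// Header text taken from the reference comment above.
const promptHeader =
    '### Instruction: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.';
// Assumed footer (instruction/response convention); not confirmed by the source.
const promptFooter = '### Response:';

// Sketch: build a single-turn prompt, optionally wrapping the user input
// in the default header and footer.
function buildSingleTurnPrompt(input, hasDefaultHeader = true, hasDefaultFooter = true) {
    const parts = [];
    if (hasDefaultHeader) parts.push(promptHeader);
    parts.push(input);
    if (hasDefaultFooter) parts.push(promptFooter);
    return parts.join('\n');
}
```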