Signed-off-by: Tare Ebelo <75279482+TareHimself@users.noreply.github.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: jacob <jacoobes@sern.dev>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
The only reason to use `as_file` is to support copying a file from a
frozen package. We don't currently support this anyway, and `as_file`
isn't available until Python 3.9, so get rid of it.
Fixes #1605
* disable llama.cpp logging unless the GPT4ALL_VERBOSE_LLAMACPP env var is
nonempty
* make the verbose flag for retrieve_model default to false (but keep it
overridable via the gpt4all constructor)
You should be able to run a basic test:
```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```
and see no non-model output on success.
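The env-var gate described above can be sketched with a plain nonempty check (a minimal sketch; `GPT4ALL_VERBOSE_LLAMACPP` is the variable named above, but the helper function is illustrative, not the package's actual API):

```python
import os

def llamacpp_verbose() -> bool:
    """Return True only when GPT4ALL_VERBOSE_LLAMACPP is set to a nonempty value."""
    return bool(os.environ.get("GPT4ALL_VERBOSE_LLAMACPP", ""))
```

An unset or empty variable keeps llama.cpp quiet; any nonempty value enables its logging.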
Running `git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git`
returns `Permission denied (publickey)` as shown below:
```
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
Cloning into gpt4all...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
```
This change replaces `git@github.com:nomic-ai/gpt4all.git` with
`https://github.com/nomic-ai/gpt4all.git` which runs without permission issues.
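The URL rewrite is mechanical, as sketched below (a hypothetical helper for illustration, not part of the repo):

```python
def ssh_to_https(url: str) -> str:
    """Rewrite git@github.com:owner/repo.git into https://github.com/owner/repo.git."""
    prefix = "git@github.com:"
    if url.startswith(prefix):
        return "https://github.com/" + url[len(prefix):]
    return url  # already HTTPS (or some other scheme); leave unchanged
```

For an existing checkout, the same substitution can be applied with `git remote set-url origin <https-url>`.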
resolves nomic-ai/gpt4all#8, resolves nomic-ai/gpt4all#49
* feat(typescript)/dynamic template (#1287)
* remove packaged yarn
* prompt templates update wip
* prompt template update
* system prompt template, update types, remove embed promises, cleanup
* support both snakecased and camelcased prompt context
* fix #1277: libbert, libfalcon and libreplit libs not being moved into the right folder after build
* added support for the modelConfigFile param, allowing the user to specify a local file instead of downloading the remote models.json. Added a warning message if the code fails to load a model config. Included prompt context docs by amogus.
* snakecase warning, put logic for loading local models.json into listModels, added constant for the default remote model list url, test improvements, simpler hasOwnProperty call
* add DEFAULT_PROMPT_CONTEXT, export new constants
* add md5sum testcase and fix constants export
* update types
* throw if attempting to list models without a source
* rebuild docs
* fix download logging of undefined url, toFixed typo; pass config file size in for future progress reporting
* added overload with union types
* bump to 2.2.0, remove alpha
* code spelling
---------
Co-authored-by: Andreas Obersteiner <8959303+iimez@users.noreply.github.com>
- minor oversight: there are now six supported architectures
- LLAMA -> LLaMA (for v1)
- note about Llama 2 and link to license
- limit some of the paragraphs to 150 chars
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>