From 2db43570ae7955f501e07fa432cc76f8dda08118 Mon Sep 17 00:00:00 2001
From: Andriy Mulyar
Date: Wed, 29 Mar 2023 17:13:55 -0400
Subject: [PATCH] Update README.md

---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index 770a2806..68046118 100644
--- a/README.md
+++ b/README.md
@@ -28,6 +28,14 @@ Clone this repository down and place the quantized model in the `chat` directory
 
 To compile for custom hardware, see our fork of the [Alpaca C++](https://github.com/zanussbaum/gpt4all.cpp) repo.
 
+-----------
+
+[Secret Unfiltered Checkpoint](https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-unfiltered-quantized.bin)
+
+This model had all refusal to answer responses removed from training. Try it with:
+- `cd chat;./gpt4all-lora-quantized-OSX-m1 -m gpt4all-lora-unfiltered-quantized.bin`
+
+-----------
 Note: the full model on GPU (16GB of RAM required) performs much better in our qualitative evaluations.
 
 # Reproducibility