mirror of https://github.com/nomic-ai/gpt4all
non-llama: explicitly greedy sampling for temp<=0 (#901)
Copied directly from llama.cpp. Without this, temp=0.0 just scales all the logits to infinity and gives bad output.
parent
6624d7b2dd
commit
9afbaee94e