gpt4all/configs/eval/generate_baseline.yaml

# model/tokenizer
model_name: # update with llama model name
tokenizer_name: # update with llama model name
lora: true
lora_path: "tloen/alpaca-lora-7b"
max_new_tokens: 512
temperature: 0.001
prompt: |
  #this code prints a string reversed
  my_string = "hello how are you"
  print(len(my_string))
  My code above does not work. Can you help me?
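
The config above points a baseline llama model plus the `tloen/alpaca-lora-7b` LoRA adapter at a single evaluation prompt (deliberately broken Python that the model is asked to fix) with near-greedy decoding (`temperature: 0.001`, up to 512 new tokens). Below is a minimal sketch of how such a config might be consumed; it assumes a Hugging Face `transformers` + `peft` setup, and the config path, `AutoModelForCausalLM` loading, and generation call are illustrative assumptions rather than the repo's exact eval script.

```python
# Hedged sketch: load generate_baseline.yaml and run one generation with it.
import yaml
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

with open("configs/eval/generate_baseline.yaml") as f:
    config = yaml.safe_load(f)

tokenizer = AutoTokenizer.from_pretrained(config["tokenizer_name"])
model = AutoModelForCausalLM.from_pretrained(
    config["model_name"], torch_dtype=torch.float16, device_map="auto"
)
if config.get("lora"):
    # Apply the LoRA adapter weights on top of the base llama model.
    model = PeftModel.from_pretrained(model, config["lora_path"])

inputs = tokenizer(config["prompt"], return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=config["max_new_tokens"],
        temperature=config["temperature"],
        do_sample=config["temperature"] > 0,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

With a temperature this close to zero, sampling is effectively greedy, so repeated runs of the baseline should produce nearly identical completions for the fixed prompt.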