Updated settings.en.mdx to include all common LLM settings
This commit is contained in:
parent
0b0a8f96e4
commit
6772f9bdad
@@ -6,13 +6,13 @@ When working with prompts, you interact with the LLM via an API or directly. You
**Top_p** - Similarly, with `top_p`, a sampling technique with temperature called nucleus sampling, you can control how deterministic the model is at generating a response. If you are looking for exact and factual answers, keep this value low. If you are looking for more diverse responses, increase it to a higher value.
The general recommendation is to alter temperature *or* top_p, not both.
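As an illustration, here is a minimal sketch of how this maps onto one concrete API, the OpenAI Chat Completions endpoint via the official Python SDK (the model name and prompt are placeholders, and an `OPENAI_API_KEY` environment variable is assumed; other providers expose an equivalent `top_p` parameter):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is the boiling point of water at sea level?"}],
    top_p=0.1,  # low top_p -> sample only from the most probable tokens (more exact, factual output)
    # temperature is left at its default, following the advice to tune one of the two, not both
)
print(response.choices[0].message.content)
```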
**Max Length** - You can manage the number of tokens the model generates by adjusting the 'max length'. Specifying a max length helps you prevent long or irrelevant responses and control costs.
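Again as a sketch against the same assumed OpenAI setup, the cap is exposed as `max_tokens` on the Chat Completions endpoint (other APIs expose a similarly named parameter):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet."}],
    max_tokens=60,  # hard cap on generated tokens: shorter responses and a bounded cost per call
)
print(response.choices[0].message.content)
```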
**Stop Sequences** - A 'stop sequence' is a string that stops the model from generating tokens. Specifying stop sequences is another way to control the length and structure of the model's response. For example, you can tell the model to generate lists that have no more than 10 items by adding "11" as a stop sequence.
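A minimal sketch of that list example, again assuming the OpenAI Python SDK (generation halts before the stop sequence itself is emitted):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "List influential scientists as a numbered list: 1., 2., 3., ..."}],
    stop=["11."],  # generation stops before "11." is produced, so the list has at most 10 items
)
print(response.choices[0].message.content)
```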
**Frequency Penalty** - The 'frequency penalty' applies a penalty on the next token proportional to how many times that token already appeared in the response and prompt. The higher the frequency penalty, the less likely a word will appear again. This setting reduces the repetition of words in the model's response by giving tokens that appear more often a higher penalty.
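Sketched against the same assumed OpenAI setup, where the parameter is `frequency_penalty` and accepts values between -2.0 and 2.0:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a short product description for a travel mug."}],
    frequency_penalty=1.0,  # each repeated use of a token increases its penalty, discouraging repeated wording
)
print(response.choices[0].message.content)
```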
**Presence Penalty** - The 'presence penalty' also applies a penalty on repeated tokens but, unlike the frequency penalty, the penalty is the same for all repeated tokens. A token that appears twice and a token that appears 10 times are penalized the same. This setting prevents the model from repeating phrases too often in its response. If you want the model to generate diverse or creative text, you might want to use a higher presence penalty. Or, if you need the model to stay focused, try using a lower presence penalty.
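And a last sketch for the corresponding `presence_penalty` parameter (same -2.0 to 2.0 range on this endpoint); the flat penalty nudges the model toward new words and topics rather than merely damping heavy repetition:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Brainstorm ideas for a weekend trip."}],
    presence_penalty=1.0,  # flat penalty on any token that has already appeared, encouraging new topics
)
print(response.choices[0].message.content)
```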