From 71d29386e8d02d7ee2a01c9211b310791a4ea973 Mon Sep 17 00:00:00 2001
From: Tao Li
Date: Fri, 14 Apr 2023 23:09:48 -0700
Subject: [PATCH] Update settings.en.mdx

Avoid future tense. Avoid using first person (like we). Instead use second
person directly to address the reader.

---
 pages/introduction/settings.en.mdx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pages/introduction/settings.en.mdx b/pages/introduction/settings.en.mdx
index b84e42f..408987a 100644
--- a/pages/introduction/settings.en.mdx
+++ b/pages/introduction/settings.en.mdx
@@ -1,11 +1,11 @@
 # LLM Settings
 
-When working with prompts, you will be interacting with the LLM via an API or directly. You can configure a few parameters to get different results for your prompts.
+When working with prompts, you interact with the LLM via an API or directly. You can configure a few parameters to get different results for your prompts.
 
-**Temperature** - In short, the lower the `temperature` the more deterministic the results in the sense that the highest probable next token is always picked. Increasing temperature could lead to more randomness encouraging more diverse or creative outputs. We are essentially increasing the weights of the other possible tokens. In terms of application, we might want to use a lower temperature value for tasks like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase the temperature value.
+**Temperature** - In short, the lower the `temperature`, the more deterministic the results in the sense that the highest probable next token is always picked. Increasing temperature could lead to more randomness, which encourages more diverse or creative outputs. You are essentially increasing the weights of the other possible tokens. In terms of application, you might want to use a lower temperature value for tasks like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase the temperature value.
 
 **Top_p** - Similarly, with `top_p`, a sampling technique with temperature called nucleus sampling, you can control how deterministic the model is at generating a response. If you are looking for exact and factual answers keep this low. If you are looking for more diverse responses, increase to a higher value.
 
 The general recommendation is to alter one, not both.
 
-Before starting with some basic examples, keep in mind that your results may vary depending on the version of LLM you are using.
\ No newline at end of file
+Before starting with some basic examples, keep in mind that your results may vary depending on the version of LLM you use.
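The temperature and top_p behavior described in the patched text can be sketched in plain Python. This is an illustrative model of the sampling step only, not any provider's actual API; the function name `sample_next_token` and the toy logits dictionary are hypothetical:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_p=1.0):
    """Illustrative temperature + nucleus (top_p) sampling over a
    {token: logit} dict. Hypothetical helper, not a real LLM API."""
    # Temperature scaling: a lower temperature sharpens the distribution,
    # so the highest-probability token dominates (more deterministic);
    # a higher temperature flattens it, raising the weights of the
    # other possible tokens (more diverse output).
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Nucleus (top_p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then sample from that set.
    # A low top_p restricts sampling to the few most likely tokens.
    nucleus, cum = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        nucleus[tok] = p
        cum += p
        if cum >= top_p:
            break

    tokens = list(nucleus)
    weights = [nucleus[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]
```

With a very low temperature the top token is picked essentially every time, and with a low top_p the candidate set collapses to the most likely tokens, which is why the recommendation is to alter one of the two settings, not both.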