diff --git a/examples/How_to_use_moderation.ipynb b/examples/How_to_use_moderation.ipynb
index 13373d9..39b8d26 100644
--- a/examples/How_to_use_moderation.ipynb
+++ b/examples/How_to_use_moderation.ipynb
@@ -201,7 +201,7 @@
    "metadata": {},
    "source": [
     "#### Setting moderation thresholds\n",
-    "Our output moderation will assess the LLM's response and block anything scoring a 0.4 or higher in any category. Setting this threshold is a common area for optimization - we recommend building an evaluation set and grading the results using a confusion matrix to set the right tolerance for your moderation. The trade-off here is generally:\n",
+    "OpenAI has selected thresholds for moderation categories that balance precision and recall for our use cases, but your use case or tolerance for moderation may be different. Setting this threshold is a common area for optimization - we recommend building an evaluation set and grading the results using a confusion matrix to set the right tolerance for your moderation. The trade-off here is generally:\n",
     "\n",
     "- More false positives leads to a fractured user experience, where customers get annoyed and the assistant seems less helpful.\n",
     "- More false negatives can cause lasting harm to your business, as people get the assistant to answer inappropriate questions, or provide inappropriate responses.\n",
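
The hunk above recommends grading an evaluation set with a confusion matrix to pick a moderation threshold. A minimal sketch of that workflow might look like the following; the function name, score fields, and toy eval data are all hypothetical stand-ins, not the notebook's actual code or data:

```python
# Hypothetical sketch: grading a moderation threshold against a labeled
# eval set. Each item has the max category score the moderation model
# assigned, plus a human label saying whether it should be blocked.

def confusion_matrix(scores, labels, threshold):
    """Count TP/FP/FN/TN for a flag-if-score>=threshold policy."""
    tp = fp = fn = tn = 0
    for score, should_block in zip(scores, labels):
        flagged = score >= threshold
        if flagged and should_block:
            tp += 1
        elif flagged and not should_block:
            fp += 1  # false positive: fractured user experience
        elif not flagged and should_block:
            fn += 1  # false negative: potential lasting harm
        else:
            tn += 1
    return tp, fp, fn, tn

# Toy eval set (made up for illustration)
scores = [0.9, 0.5, 0.35, 0.1, 0.45, 0.05]
labels = [True, True, False, False, True, False]

# Sweep candidate thresholds and inspect the FP/FN trade-off
for threshold in (0.3, 0.4, 0.5):
    tp, fp, fn, tn = confusion_matrix(scores, labels, threshold)
    print(f"threshold={threshold}: TP={tp} FP={fp} FN={fn} TN={tn}")
```

Sweeping thresholds like this makes the trade-off in the bullets concrete: lowering the threshold converts false negatives into false positives, and vice versa, so the right setting depends on which error is costlier for your business.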