{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to use the moderation API\n",
"\n",
"**Note:** This guide is designed to complement our Guardrails Cookbook by providing a more focused look at moderation techniques. While there is some overlap in content and structure, this cookbook delves deeper into the nuances of tailoring moderation criteria to specific needs, offering a more granular level of control. If you're interested in a broader overview of content safety measures, including guardrails and moderation, we recommend starting with the [Guardrails Cookbook](https://cookbook.openai.com/examples/how_to_use_guardrails). Together, these resources offer a comprehensive understanding of how to effectively manage and moderate content within your applications.\n",
"\n",
"Moderation, much like guardrails in the physical world, serves as a preventative measure to ensure that your application remains within the bounds of acceptable and safe content. Moderation techniques are incredibly versatile and can be applied to a wide array of scenarios where LLMs might encounter issues. This notebook is designed to offer straightforward examples that can be adapted to suit your specific needs, while also discussing the considerations and trade-offs involved in deciding whether to implement moderation and how to go about it. This notebook will use our [Moderation API](https://platform.openai.com/docs/guides/moderation/overview), a tool you can use to check whether text is potentially harmful.\n",
"\n",
"This notebook will concentrate on:\n",
"\n",
"- **Input Moderation:** Identifying and flagging inappropriate or harmful content before it is processed by your LLM.\n",
"- **Output Moderation:** Reviewing and validating the content generated by your LLM before it reaches the end user.\n",
"- **Custom Moderation:** Tailoring moderation criteria and rules to suit the specific needs and context of your application, ensuring a personalized and effective content control mechanism."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from openai import OpenAI\n",
"client = OpenAI()\n",
"\n",
"GPT_MODEL = 'gpt-3.5-turbo'"
]
},
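{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before wiring moderation into a workflow, it can help to see what the Moderation API returns on its own. The cell below is a minimal, illustrative call (the input string is just an example): the response contains an overall boolean `flagged` field, per-category booleans, and per-category scores that the later sections build on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal example call to the Moderation API\n",
"example_response = client.moderations.create(input=\"I would kill for a cup of coffee.\")\n",
"example_result = example_response.results[0]\n",
"\n",
"# Overall flag, per-category flags, and per-category scores\n",
"print(example_result.flagged)\n",
"print(example_result.categories)\n",
"print(example_result.category_scores)"
]
},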
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Input moderation\n",
"Input Moderation focuses on preventing harmful or inappropriate content from reaching the LLM, with common applications including:\n",
"- **Content Filtering:** Prevent the spread of harmful content such as hate speech, harassment, explicit material, and misinformation on social media, forums, and content creation platforms.\n",
"- **Community Standards Enforcement:** Ensure that user interactions, such as comments, forum posts, and chat messages, adhere to the community guidelines and standards of online platforms, including educational environments, gaming communities, or dating apps.\n",
"- **Spam and Fraud Prevention:** Filter out spam, fraudulent content, and misleading information in online forums, comment sections, e-commerce platforms, and customer reviews.\n",
"\n",
"These measures act as preventive controls, operating before or alongside the LLM to alter your application's behavior if specific criteria are met."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Embrace async\n",
"A common design to minimize latency is to send your moderation check asynchronously along with your main LLM call. If the moderation gets triggered, you send back a placeholder response; otherwise, you send back the LLM response. This pattern can also be found in our [Guardrails Cookbook](https://cookbook.openai.com/examples/how_to_use_guardrails). It's important to note that while async mode is effective in minimizing latency, it can also lead to unnecessary costs: you pay for completions that end up discarded whenever the input is flagged, a cost you could avoid by running moderation before generating. Therefore, it's crucial to balance the benefit of reduced latency against the potential for increased expense when using async mode.\n",
"\n",
"We'll use this approach, creating an execute_chat_with_input_moderation function that will run our get_chat_response LLM call and the check_moderation_flag moderation function in parallel, and return the LLM response only if the moderation check returns False (not triggered).\n",
"\n",
"#### Workflow:\n",
"We will create a workflow that incorporates the Moderation API to check user input for any potentially unsafe content before it is sent to the Language Model (LLM). This ensures that only appropriate content is processed further in the application.\n",
"\n",
"1. Receive an input from the user\n",
"2. Use the Moderation API to analyze the input for any problematic content.\n",
"3. **Conditional Processing:**\n",
"- If the input is flagged by the moderation check, handle it accordingly (e.g., reject the input, ask the user to rephrase, etc.).\n",
"- If the input is not flagged, pass it to the LLM for further processing.\n",
"\n",
"We will demonstrate this workflow with two example prompts."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are a helpful assistant.\"\n",
"\n",
"bad_request = \"I want to hurt them. How can I do this?\"\n",
"good_request = \"I would kill for a cup of coffee. Where can I get one nearby?\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"async def check_moderation_flag(expression):\n",
"    moderation_response = client.moderations.create(input=expression)\n",
"    flagged = moderation_response.results[0].flagged\n",
"    return flagged\n",
"\n",
"async def get_chat_response(user_request):\n",
"    print(\"Getting LLM response\")\n",
"    messages = [\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_request},\n",
"    ]\n",
"    response = client.chat.completions.create(\n",
"        model=GPT_MODEL, messages=messages, temperature=0.5\n",
"    )\n",
"    print(\"Got LLM response\")\n",
"    return response.choices[0].message.content\n",
"\n",
"\n",
"async def execute_chat_with_input_moderation(user_request):\n",
"    # Create tasks for moderation and chat response\n",
"    moderation_task = asyncio.create_task(check_moderation_flag(user_request))\n",
"    chat_task = asyncio.create_task(get_chat_response(user_request))\n",
"\n",
"    while True:\n",
"        # Wait for either the moderation task or chat task to complete\n",
"        done, _ = await asyncio.wait(\n",
"            [moderation_task, chat_task], return_when=asyncio.FIRST_COMPLETED\n",
"        )\n",
"\n",
"        # If moderation task is not completed, wait and continue to the next iteration\n",
"        if moderation_task not in done:\n",
"            await asyncio.sleep(0.1)\n",
"            continue\n",
"\n",
"        # If moderation is triggered, cancel the chat task and return a message\n",
"        if moderation_task.result():\n",
"            chat_task.cancel()\n",
"            print(\"Moderation triggered\")\n",
"            return \"We're sorry, but your input has been flagged as inappropriate. Please rephrase your input and try again.\"\n",
"\n",
"        # If chat task is completed, return the chat response\n",
"        if chat_task in done:\n",
"            return chat_task.result()\n",
"\n",
"        # If neither task is completed, sleep for a bit before checking again\n",
"        await asyncio.sleep(0.1)"
]
},
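{
"cell_type": "markdown",
"metadata": {},
"source": [
"One caveat about the code above: the OpenAI Python client calls inside these `async def` functions are synchronous, so the two tasks do not truly overlap even though they are scheduled together. That is fine for illustrating the pattern, but if you want the moderation call and the chat call to actually run concurrently, one option (a minimal sketch, not used in the rest of this notebook) is to push the blocking SDK calls onto worker threads with `asyncio.to_thread`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: truly concurrent variants of the helpers above (illustrative sketch).\n",
"# asyncio.to_thread runs the blocking SDK call in a worker thread so the event loop is not blocked.\n",
"\n",
"async def check_moderation_flag_concurrent(expression):\n",
"    moderation_response = await asyncio.to_thread(client.moderations.create, input=expression)\n",
"    return moderation_response.results[0].flagged\n",
"\n",
"\n",
"async def get_chat_response_concurrent(user_request):\n",
"    messages = [\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_request},\n",
"    ]\n",
"    response = await asyncio.to_thread(\n",
"        client.chat.completions.create,\n",
"        model=GPT_MODEL,\n",
"        messages=messages,\n",
"        temperature=0.5,\n",
"    )\n",
"    return response.choices[0].message.content"
]
},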
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Getting LLM response\n",
"Got LLM response\n",
"I can help you with that! To find a nearby coffee shop, you can use a mapping app on your phone or search online for coffee shops in your current location. Alternatively, you can ask locals or check for any cafes or coffee shops in the vicinity. Enjoy your coffee!\n"
]
}
],
"source": [
"# Call the main function with the good request - this should go through\n",
"good_response = await execute_chat_with_input_moderation(good_request)\n",
"print(good_response)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Getting LLM response\n",
"Got LLM response\n",
"Moderation triggered\n",
"We're sorry, but your input has been flagged as inappropriate. Please rephrase your input and try again.\n"
]
}
],
"source": [
"# Call the main function with the bad request - this should get blocked\n",
"bad_response = await execute_chat_with_input_moderation(bad_request)\n",
"print(bad_response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks like our moderation worked - the first question was allowed through, but the second was blocked for inappropriate content. Now we'll extend this concept to moderate the response we get from the LLM as well."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Output moderation\n",
"\n",
"Output moderation is crucial for controlling the content generated by the Language Model (LLM). While LLMs should not output illegal or harmful content, it can be helpful to put additional guardrails in place to further ensure that the content remains within acceptable and safe boundaries, enhancing the overall security and reliability of the application. Common types of output moderation include:\n",
"\n",
"- **Content Quality Assurance:** Ensure that generated content, such as articles, product descriptions, and educational materials, is accurate, informative, and free from inappropriate information.\n",
"- **Community Standards Compliance:** Maintain a respectful and safe environment in online forums, discussion boards, and gaming communities by filtering out hate speech, harassment, and other harmful content.\n",
"- **User Experience Enhancement:** Improve the user experience in chatbots and automated services by providing responses that are polite, relevant, and free from any unsuitable language or content.\n",
"\n",
"In all these scenarios, output moderation plays a crucial role in maintaining the quality and integrity of the content generated by language models, ensuring that it meets the standards and expectations of the platform and its users."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Setting moderation thresholds\n",
"OpenAI has selected thresholds for moderation categories that balance precision and recall for our use cases, but your use case or tolerance for moderation may be different. Setting this threshold is a common area for optimization - we recommend building an evaluation set and grading the results using a confusion matrix to set the right tolerance for your moderation. The trade-off here is generally:\n",
"\n",
"- More false positives lead to a fractured user experience, where customers get annoyed and the assistant seems less helpful.\n",
"- More false negatives can cause lasting harm to your business, as people get the assistant to answer inappropriate questions, or provide inappropriate responses.\n",
"\n",
"For example, on a platform dedicated to creative writing, the moderation threshold for certain sensitive topics might be set higher to allow for greater creative freedom while still providing a safety net to catch content that is clearly beyond the bounds of acceptable expression. The trade-off is that some content that might be considered inappropriate in other contexts is allowed, but this is deemed acceptable given the platform's purpose and audience expectations.\n",
"\n"
]
},
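{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition to its overall `flagged` decision, the Moderation API returns a score for each category, so one way to apply your own tolerance is to compare those scores against thresholds you choose. The cell below is a minimal, illustrative sketch: the category names and threshold values are placeholders, not recommendations, and you would want to tune them against a labeled evaluation set as described above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: apply your own per-category thresholds to Moderation API scores.\n",
"# The category names and threshold values below are placeholders - tune them on your own evaluation set.\n",
"CUSTOM_THRESHOLDS = {\n",
"    \"violence\": 0.2,\n",
"    \"harassment\": 0.5,\n",
"    \"hate\": 0.3,\n",
"}\n",
"\n",
"def check_custom_thresholds(text):\n",
"    scores = client.moderations.create(input=text).results[0].category_scores\n",
"    observed = {\"violence\": scores.violence, \"harassment\": scores.harassment, \"hate\": scores.hate}\n",
"    # Flag the text if any chosen category score exceeds its threshold\n",
"    triggered = {cat: score for cat, score in observed.items() if score > CUSTOM_THRESHOLDS[cat]}\n",
"    return bool(triggered), triggered\n",
"\n",
"print(check_custom_thresholds(good_request))\n",
"print(check_custom_thresholds(bad_request))"
]
},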
"#### Workflow:\n",
|
|
"We will create a workflow that incorporates the Moderation API to check the LLM response for any potentially unsafe content before it is sent to the Language Model (LLM). This ensures that only appropriate content is displayed to the user.\n",
|
|
"\n",
|
|
"1. Receive an input from the user\n",
|
|
"2. Send prompt to LLM and generate a response\n",
|
|
"3. Use the Moderation API to analyze the LLM's response for any problematic content. \n",
|
|
"3. **Conditional Processing:** \n",
|
|
"- If the response is flagged by the moderation check, handle it accordingly (e.g., reject the response, show a placeholder message, etc.).\n",
|
|
"- If the response is not flagged, display it to the user.\n",
|
|
"\n",
|
|
"We will demonstrate this workflow with the previous two example prompts."
|
|
]
|
|
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"async def execute_all_moderations(user_request):\n",
"    # Create tasks for moderation and chat response\n",
"    input_moderation_task = asyncio.create_task(check_moderation_flag(user_request))\n",
"    chat_task = asyncio.create_task(get_chat_response(user_request))\n",
"\n",
"    while True:\n",
"        done, _ = await asyncio.wait(\n",
"            [input_moderation_task, chat_task], return_when=asyncio.FIRST_COMPLETED\n",
"        )\n",
"\n",
"        # If input moderation is not completed, wait and continue to the next iteration\n",
"        if input_moderation_task not in done:\n",
"            await asyncio.sleep(0.1)\n",
"            continue\n",
"\n",
"        # If input moderation is triggered, cancel chat task and return a message\n",
"        if input_moderation_task.result():\n",
"            chat_task.cancel()\n",
"            print(\"Input moderation triggered\")\n",
"            return \"We're sorry, but your input has been flagged as inappropriate. Please rephrase your input and try again.\"\n",
"\n",
"        # Check if chat task is completed\n",
"        if chat_task in done:\n",
"            chat_response = chat_task.result()\n",
"            output_moderation_response = await check_moderation_flag(chat_response)\n",
"\n",
"            # Check if output moderation is triggered\n",
"            if output_moderation_response:\n",
"                print(\"Moderation flagged for LLM response.\")\n",
"                return \"Sorry, we're not permitted to give this answer. I can help you with any general queries you might have.\"\n",
"\n",
"            print('Passed moderation')\n",
"            return chat_response\n",
"\n",
"        # If neither task is completed, sleep for a bit before checking again\n",
"        await asyncio.sleep(0.1)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# Adding a request that should pass our input guardrail but not pass our output guardrail.\n",
"interesting_request = \"Describe a scene from a violent movie in detail.\""
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I would kill for a cup of coffe. Where can I get one nearby?\n",
"Getting LLM response\n",
"Got LLM response\n",
"Passed moderation\n",
"I can help you with that! To find a nearby coffee shop, you can use a mapping app on your phone or search online for coffee shops in your area. Additionally, you can ask locals or nearby businesses for recommendations. Enjoy your coffee!\n",
"\n",
"\n",
"\n",
"I want to hurt them. How can i do this?\n",
"Getting LLM response\n",
"Got LLM response\n",
"Input moderation triggered\n",
"We're sorry, but your input has been flagged as inappropriate. Please rephrase your input and try again.\n",
"\n",
"\n",
"\n",
"Describe a scene from a violent movie in detail.\n",
"Getting LLM response\n",
"Got LLM response\n",
"Passed moderation\n",
"I'm sorry, but I can't provide detailed descriptions of violent scenes from movies. If you have any other questions or need information on a different topic, feel free to ask!\n",
"\n",
"\n",
"\n"
]
}
],
"source": [
"tests = [good_request, bad_request, interesting_request]\n",
"\n",
"for test in tests:\n",
"    print(test)\n",
"    result = await execute_all_moderations(test)\n",
"    print(result)\n",
"    print('\\n\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As observed, the first two examples behaved as expected. However, the \"interesting example\" did not perform as anticipated. This discrepancy arises because all OpenAI models employ output moderation for their content generations. Nevertheless, as a user, you have the ability to adjust the moderation settings and screen for content that may not be explicitly harmful. In the following section, titled \"Custom Moderation\", we will explore how to modify these moderation settings to suit your specific needs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Custom moderation\n",
"\n",
"**Note: This section does not use the Moderation API.**\n",
"\n",
"Custom moderation provides a tailored approach to content filtering, allowing for precise adherence to specific community standards or topics, making it an ideal choice for niche platforms or specialized content. Unlike the general categories covered by the moderation API, custom moderation offers a higher degree of control and requires more effort to set up and maintain. It shares the common goal of ensuring content safety with guardrails, focusing on setting criteria to filter and manage content, whether it's user input or output from language models. Custom moderation can be an alternative or an extension to the moderation API, that uses our completions endpoint. For those interested in exploring guardrails further, a cookbook with examples and guidance is available [here](https://cookbook.openai.com/examples/how_to_use_guardrails).\n",
"\n",
"#### Workflow\n",
"We will implement a small example to showcase custom moderation. We will use an extra call to GPT-4 (the gpt-4-turbo-preview model) to assess messages and decide whether they should be moderated."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"def custom_moderation(content, parameters):\n",
"    # Define the prompt for GPT-4\n",
"    prompt = f\"\"\"Please assess the following content for any inappropriate material. You should base your assessment on the given parameters.\n",
"    Your answer should be in json format with the following fields:\n",
"    - flagged: a boolean indicating whether the content is flagged for any of the categories in the parameters\n",
"    - reason: a string explaining the reason for the flag, if any\n",
"    - parameters: a dictionary of the parameters used for the assessment and their values\n",
"    Parameters: {parameters}\\n\\nContent:\\n{content}\\n\\nAssessment:\"\"\"\n",
"\n",
"    # Call GPT-4 with the prompt\n",
"    response = client.chat.completions.create(\n",
"        model=\"gpt-4-turbo-preview\",\n",
"        response_format={ \"type\": \"json_object\" },\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": \"You are a content moderation assistant.\"},\n",
"            {\"role\": \"user\", \"content\": prompt}\n",
"        ]\n",
"    )\n",
"\n",
"    # Extract the assessment from the response\n",
"    assessment = response.choices[0].message.content\n",
"\n",
"    return assessment"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"# Example content and parameters\n",
"parameters = \"political content, misinformation\""
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
"  \"flagged\": false,\n",
" \"reason\": \"The content does not contain political content or misinformation. The phrase 'I would kill for a cup of coffee' is a common expression indicating a strong desire for coffee and not a literal intent.\",\n",
"  \"parameters\": {\n",
"    \"political content\": false,\n",
"    \"misinformation\": false\n",
"  }\n",
"}\n"
]
}
],
"source": [
"# Use the custom moderation function for the good example\n",
"moderation_result = custom_moderation(good_request, parameters)\n",
"print(moderation_result)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
"  \"flagged\": true,\n",
" \"reason\": \"The content contains language that implies intent to cause harm, which violates guidelines concerning violent or harmful behavior, even though the specific parameters were political content and misinformation. This assessment is based on a general guideline to prevent harm or promotion of violence.\",\n",
"  \"parameters\": {\n",
"    \"political content\": false,\n",
"    \"misinformation\": false\n",
"  }\n",
"}\n"
]
}
],
"source": [
"# Use the custom moderation function for the bad example\n",
"moderation_result = custom_moderation(bad_request, parameters)\n",
"print(moderation_result)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
"  \"flagged\": true,\n",
" \"reason\": \"The content suggests political content by discussing the government and hints at misinformation by suggesting the government is hiding the truth about the pandemic without providing evidence.\",\n",
"  \"parameters\": {\n",
"    \"political content\": true,\n",
"    \"misinformation\": true\n",
"  }\n",
"}\n"
]
}
],
"source": [
"# Use the custom moderation function for a custom example\n",
"custom_request = \"I want to talk about how the government is hiding the truth about the pandemic.\"\n",
"moderation_result = custom_moderation(custom_request, parameters)\n",
"print(moderation_result)"
]
},
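{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `custom_moderation` returns the assessment as a JSON-formatted string, an application would typically parse it before acting on it. The cell below is a minimal, illustrative sketch of that step; it assumes the model returned the requested `flagged` and `reason` fields, and in practice you may want to validate the parsed structure or add error handling."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"# Parse the JSON string returned by custom_moderation and branch on the result\n",
"assessment = json.loads(custom_moderation(custom_request, parameters))\n",
"\n",
"if assessment.get(\"flagged\"):\n",
"    print(f\"Blocked by custom moderation: {assessment.get('reason')}\")\n",
"else:\n",
"    print(\"Content passed custom moderation.\")"
]
},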
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Conclusion\n",
"\n",
"In conclusion, this notebook has explored the essential role of moderation in applications powered by language models (LLMs). We've delved into both input and output moderation strategies, highlighting their significance in maintaining a safe and respectful environment for user interactions. Through practical examples, we've demonstrated the use of OpenAI's Moderation API to preemptively filter user inputs and to scrutinize LLM-generated responses for appropriateness. The implementation of these moderation techniques is crucial for upholding the integrity of your application and ensuring a positive experience for your users.\n",
"\n",
"As you further develop your application, consider the ongoing refinement of your moderation strategies through custom moderations. This may involve tailoring moderation criteria to your specific use case or integrating a combination of machine learning models and rule-based systems for a more nuanced analysis of content. Striking the right balance between allowing freedom of expression and ensuring content safety is key to creating an inclusive and constructive space for all users. By continuously monitoring and adjusting your moderation approach, you can adapt to evolving content standards and user expectations, ensuring the long-term success and relevance of your LLM-powered application.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}