mirror of https://github.com/openai/openai-cookbook
synced 2024-11-11 13:11:02 +00:00

Change from display(Markdown) to print() (#1199)

This commit is contained in:
parent 0d665efa47
commit cf15304c39
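The change this commit applies throughout the notebook can be sketched as a minimal before/after. The `response` object below is a hypothetical stand-in for the Chat Completions result the notebook builds; the real one comes from `client.chat.completions.create(...)`.

```python
from types import SimpleNamespace

# Hypothetical stand-in for the API response used throughout the notebook;
# the real object is returned by client.chat.completions.create(...).
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="The sum of 2 + 2 is 4."))]
)

# Before (renders Markdown, requires a Jupyter front end):
#   from IPython.display import display, Markdown
#   display(Markdown(response.choices[0].message.content))

# After (plain text, works in any environment the notebook runs in):
print(response.choices[0].message.content)
```

Switching to `print` also changes the saved cell outputs from `display_data` (rendered Markdown) to plain `stream` output on stdout, which is what the diff below reflects.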
@@ -55,7 +55,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": 2,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -69,14 +69,18 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 3,
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Assistant: Sure! The sum of 2 + 2 is 4. If you have any more questions or need further assistance, feel free to ask!\n"
+"Assistant: Of course! \n",
+"\n",
+"\\[ 2 + 2 = 4 \\]\n",
+"\n",
+"If you have any other questions, feel free to ask!\n"
 ]
 }
 ],
@@ -106,7 +110,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 4,
 "metadata": {},
 "outputs": [
 {
@@ -139,41 +143,31 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": 5,
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"To find the area of the triangle, we can use Heron's formula. Heron's formula states that the area of a triangle with sides of length \\(a\\), \\(b\\), and \\(c\\) is:\n",
-"\n",
-"\\[ \\text{Area} = \\sqrt{s(s-a)(s-b)(s-c)} \\]\n",
-"\n",
-"where \\(s\\) is the semi-perimeter of the triangle:\n",
-"\n",
-"\\[ s = \\frac{a + b + c}{2} \\]\n",
-"\n",
-"For the given triangle, the side lengths are \\(a = 5\\), \\(b = 6\\), and \\(c = 9\\).\n",
-"\n",
-"First, calculate the semi-perimeter \\(s\\):\n",
-"\n",
-"\\[ s = \\frac{5 + 6 + 9}{2} = \\frac{20}{2} = 10 \\]\n",
-"\n",
-"Now, apply Heron's formula:\n",
-"\n",
-"\\[ \\text{Area} = \\sqrt{10(10-5)(10-6)(10-9)} \\]\n",
-"\\[ \\text{Area} = \\sqrt{10 \\cdot 5 \\cdot 4 \\cdot 1} \\]\n",
-"\\[ \\text{Area} = \\sqrt{200} \\]\n",
-"\\[ \\text{Area} = 10\\sqrt{2} \\]\n",
-"\n",
-"So, the area of the triangle is \\(10\\sqrt{2}\\) square units."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"To find the area of the triangle, we can use Heron's formula. First, we need to find the semi-perimeter of the triangle.\n",
+"\n",
+"The sides of the triangle are 6, 5, and 9.\n",
+"\n",
+"1. Calculate the semi-perimeter \\( s \\):\n",
+"\\[ s = \\frac{a + b + c}{2} = \\frac{6 + 5 + 9}{2} = 10 \\]\n",
+"\n",
+"2. Use Heron's formula to find the area \\( A \\):\n",
+"\\[ A = \\sqrt{s(s-a)(s-b)(s-c)} \\]\n",
+"\n",
+"Substitute the values:\n",
+"\\[ A = \\sqrt{10(10-6)(10-5)(10-9)} \\]\n",
+"\\[ A = \\sqrt{10 \\cdot 4 \\cdot 5 \\cdot 1} \\]\n",
+"\\[ A = \\sqrt{200} \\]\n",
+"\\[ A = 10\\sqrt{2} \\]\n",
+"\n",
+"So, the area of the triangle is \\( 10\\sqrt{2} \\) square units.\n"
+]
 }
 ],
 "source": [
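The Heron's-formula computation shown in the cell output above (sides 5, 6, 9) can be checked numerically with a few lines of Python:

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths, via Heron's formula."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Sides from the notebook's example: s = 10, product = 10 * 5 * 4 * 1 = 200
area = heron_area(5, 6, 9)
print(area)  # 14.142135623730951, i.e. 10 * sqrt(2)
```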
@@ -198,7 +192,7 @@
 " temperature=0.0,\n",
 ")\n",
 "\n",
-"display(Markdown(response.choices[0].message.content))"
+"print(response.choices[0].message.content)"
 ]
 },
 {
@@ -214,32 +208,32 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"To find the area of the triangle, we can use Heron's formula. First, we need to find the semi-perimeter of the triangle.\n",
-"\n",
-"The sides of the triangle are 6, 5, and 9.\n",
-"\n",
-"1. Calculate the semi-perimeter \\( s \\):\n",
-"\\[ s = \\frac{a + b + c}{2} = \\frac{6 + 5 + 9}{2} = 10 \\]\n",
-"\n",
-"2. Use Heron's formula to find the area \\( A \\):\n",
-"\\[ A = \\sqrt{s(s-a)(s-b)(s-c)} \\]\n",
-"\n",
-"Substitute the values:\n",
-"\\[ A = \\sqrt{10(10-6)(10-5)(10-9)} \\]\n",
-"\\[ A = \\sqrt{10 \\cdot 4 \\cdot 5 \\cdot 1} \\]\n",
-"\\[ A = \\sqrt{200} \\]\n",
-"\\[ A = 10\\sqrt{2} \\]\n",
-"\n",
-"So, the area of the triangle is \\( 10\\sqrt{2} \\) square units."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"To find the area of the triangle, we can use Heron's formula. Heron's formula states that the area of a triangle with sides of length \\(a\\), \\(b\\), and \\(c\\) is:\n",
+"\n",
+"\\[ \\text{Area} = \\sqrt{s(s-a)(s-b)(s-c)} \\]\n",
+"\n",
+"where \\(s\\) is the semi-perimeter of the triangle:\n",
+"\n",
+"\\[ s = \\frac{a + b + c}{2} \\]\n",
+"\n",
+"For the given triangle, the side lengths are \\(a = 5\\), \\(b = 6\\), and \\(c = 9\\).\n",
+"\n",
+"First, calculate the semi-perimeter \\(s\\):\n",
+"\n",
+"\\[ s = \\frac{5 + 6 + 9}{2} = \\frac{20}{2} = 10 \\]\n",
+"\n",
+"Now, apply Heron's formula:\n",
+"\n",
+"\\[ \\text{Area} = \\sqrt{10(10-5)(10-6)(10-9)} \\]\n",
+"\\[ \\text{Area} = \\sqrt{10 \\cdot 5 \\cdot 4 \\cdot 1} \\]\n",
+"\\[ \\text{Area} = \\sqrt{200} \\]\n",
+"\\[ \\text{Area} = 10\\sqrt{2} \\]\n",
+"\n",
+"So, the area of the triangle is \\(10\\sqrt{2}\\) square units.\n"
+]
 }
 ],
 "source": [
@@ -257,7 +251,7 @@
 " temperature=0.0,\n",
 ")\n",
 "\n",
-"display(Markdown(response.choices[0].message.content))"
+"print(response.choices[0].message.content)"
 ]
 },
 {
@@ -331,7 +325,7 @@
 "name": "stderr",
 "output_type": "stream",
 "text": [
-" \r"
+" "
 ]
 },
 {
@@ -342,6 +336,13 @@
 "Extracted 218 frames\n",
 "Extracted audio to data/keynote_recap.mp3\n"
 ]
 },
+{
+"name": "stderr",
+"output_type": "stream",
+"text": [
+"\r"
+]
+}
 ],
 "source": [
@@ -450,43 +451,45 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"## Video Summary\n",
-"\n",
-"The video appears to be a presentation from OpenAI's DevDay event. Here is a summary based on the provided frames:\n",
-"\n",
-"1. **Introduction**:\n",
-" - The video starts with the title \"OpenAI DevDay\" and a \"Keynote Recap\" slide.\n",
-" - The event venue is shown, with attendees gathering and the stage being set up.\n",
-"\n",
-"2. **Keynote Presentation**:\n",
-" - A speaker, likely a representative from OpenAI, takes the stage to deliver the keynote address.\n",
-" - The presentation covers several key topics and announcements:\n",
-" - **GPT-4 Turbo**: Introduction of GPT-4 Turbo, highlighting its capabilities and improvements.\n",
-" - **JSON Mode**: A feature that allows structured data output in JSON format.\n",
-" - **Function Calling**: Demonstration of how the model can call functions based on user instructions.\n",
-" - **Enhanced Features**: Discussion on improvements such as increased context length, better control, and enhanced knowledge.\n",
-" - **DALL-E 3**: Introduction of DALL-E 3, a new version of the image generation model.\n",
-" - **Custom Models**: Announcement of the ability to create custom models tailored to specific needs.\n",
-" - **Token Efficiency**: Explanation of the new token efficiency, with 3x less input tokens and 2x less output tokens.\n",
-" - **API Enhancements**: Overview of new API features, including threading, retrieval, code interpreter, and function calling.\n",
-"\n",
-"3. **Closing Remarks**:\n",
-" - The speaker emphasizes the importance of building with natural language and the potential of the new tools and features.\n",
-" - The presentation concludes with a thank you to the audience and a final display of the OpenAI DevDay logo.\n",
-"\n",
-"4. **Audience Engagement**:\n",
-" - The video shows the audience's reactions and engagement during the presentation, with applause and focused attention.\n",
-"\n",
-"Overall, the video captures the highlights of OpenAI's DevDay event, showcasing new advancements and features in their AI models and tools."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"## Video Summary: OpenAI DevDay Keynote Recap\n",
+"\n",
+"The video appears to be a keynote recap from OpenAI's DevDay event. Here are the key points covered in the video:\n",
+"\n",
+"1. **Introduction and Event Overview**:\n",
+" - The video starts with the title \"OpenAI DevDay\" and transitions to \"Keynote Recap.\"\n",
+" - The event venue is shown, with attendees gathering and the stage set up.\n",
+"\n",
+"2. **Keynote Presentation**:\n",
+" - A speaker, presumably from OpenAI, takes the stage to present.\n",
+" - The presentation covers various topics related to OpenAI's latest developments and announcements.\n",
+"\n",
+"3. **Announcements**:\n",
+" - **GPT-4 Turbo**: Introduction of GPT-4 Turbo, highlighting its enhanced capabilities and performance.\n",
+" - **JSON Mode**: A new feature that allows for structured data output in JSON format.\n",
+" - **Function Calling**: Demonstration of improved function calling capabilities, making interactions more efficient.\n",
+" - **Context Length and Control**: Enhancements in context length and user control over the model's responses.\n",
+" - **Better Knowledge Integration**: Improvements in the model's knowledge base and retrieval capabilities.\n",
+"\n",
+"4. **Product Demonstrations**:\n",
+" - **DALL-E 3**: Introduction of DALL-E 3 for advanced image generation.\n",
+" - **Custom Models**: Announcement of custom models, allowing users to tailor models to specific needs.\n",
+" - **API Enhancements**: Updates to the API, including threading, retrieval, and code interpreter functionalities.\n",
+"\n",
+"5. **Pricing and Token Efficiency**:\n",
+" - Discussion on GPT-4 Turbo pricing, emphasizing cost efficiency with reduced input and output tokens.\n",
+"\n",
+"6. **New Features and Tools**:\n",
+" - Introduction of new tools and features for developers, including a variety of GPT-powered applications.\n",
+" - Emphasis on building with natural language and the ease of creating custom applications.\n",
+"\n",
+"7. **Closing Remarks**:\n",
+" - The speaker concludes the presentation, thanking the audience and highlighting the future of OpenAI's developments.\n",
+"\n",
+"The video ends with the OpenAI logo and the event title \"OpenAI DevDay.\"\n"
+]
 }
 ],
 "source": [
@@ -503,7 +506,7 @@
 " ],\n",
 " temperature=0,\n",
 ")\n",
-"display(Markdown(response.choices[0].message.content))"
+"print(response.choices[0].message.content)"
 ]
 },
 {
@@ -524,28 +527,23 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"### Summary\n",
-"\n",
-"Welcome to OpenAI's first-ever Dev Day. Key announcements include:\n",
-"\n",
-"- **GPT-4 Turbo**: A new model supporting up to 128,000 tokens of context, featuring JSON mode for valid JSON responses, improved instruction following, and better knowledge retrieval from external documents or databases. It is also significantly cheaper than GPT-4.\n",
-"- **New Features**: \n",
-" - **Dolly 3**, **GPT-4 Turbo with Vision**, and a new **Text-to-Speech model** are now available in the API.\n",
-" - **Custom Models**: A program where OpenAI researchers help companies create custom models tailored to their specific use cases.\n",
-" - **Increased Rate Limits**: Doubling tokens per minute for established GPT-4 customers and allowing requests for further rate limit changes.\n",
-"- **GPTs**: Tailored versions of ChatGPT for specific purposes, programmable through conversation, with options for private or public sharing, and a forthcoming GPT Store.\n",
-"- **Assistance API**: Includes persistent threads, built-in retrieval, a code interpreter, and improved function calling.\n",
-"\n",
-"OpenAI is excited about the future of AI integration and looks forward to seeing what users will create with these new tools. The event concludes with an invitation to return next year for more advancements."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"### Summary\n",
+"\n",
+"Welcome to OpenAI's first-ever Dev Day. Key announcements include:\n",
+"\n",
+"- **GPT-4 Turbo**: A new model supporting up to 128,000 tokens of context, featuring JSON mode for valid JSON responses, improved instruction following, and better knowledge retrieval from external documents or databases. It is also significantly cheaper than GPT-4.\n",
+"- **New Features**: \n",
+" - **Dolly 3**, **GPT-4 Turbo with Vision**, and a new **Text-to-Speech model** are now available in the API.\n",
+" - **Custom Models**: A program where OpenAI researchers help companies create custom models tailored to their specific use cases.\n",
+" - **Increased Rate Limits**: Doubling tokens per minute for established GPT-4 customers and allowing requests for further rate limit changes.\n",
+"- **GPTs**: Tailored versions of ChatGPT for specific purposes, programmable through conversation, with options for private or public sharing, and a forthcoming GPT Store.\n",
+"- **Assistance API**: Includes persistent threads, built-in retrieval, a code interpreter, and improved function calling.\n",
+"\n",
+"OpenAI is excited about the future of AI integration and looks forward to seeing what users will create with these new tools. The event concludes with an invitation to return next year for more advancements.\n"
+]
 }
 ],
 "source": [
@@ -568,7 +566,7 @@
 " ],\n",
 " temperature=0,\n",
 ")\n",
-"display(Markdown(response.choices[0].message.content))"
+"print(response.choices[0].message.content)"
 ]
 },
 {
@@ -587,54 +585,60 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"## Video Summary\n",
-"\n",
-"### Event Introduction\n",
-"- **Title:** OpenAI Dev Day\n",
-"- **Keynote Recap:** The event begins with a keynote recap, setting the stage for the announcements.\n",
-"\n",
-"### Venue and Audience\n",
-"- **Location:** The event is held at a venue with a sign reading \"OpenAI DevDay.\"\n",
-"- **Audience:** The venue is filled with attendees, eagerly awaiting the presentations.\n",
-"\n",
-"### Key Announcements\n",
-"1. **GPT-4 Turbo:**\n",
-" - **Launch:** Introduction of GPT-4 Turbo.\n",
-" - **Features:** Supports up to 128,000 tokens of context.\n",
-" - **JSON Mode:** Ensures responses in valid JSON format.\n",
-" - **Function Calling:** Improved ability to call multiple functions and follow instructions.\n",
-" - **Knowledge Update:** Knowledge up to April 2023, with ongoing improvements.\n",
-" - **API Integration:** Available in the API along with DALL-E 3 and a new Text-to-Speech model.\n",
-" - **Custom Models:** New program for creating custom models tailored to specific use cases.\n",
-" - **Rate Limits:** Doubling tokens per minute for established GPT-4 customers, with options to request further changes.\n",
-" - **Pricing:** GPT-4 Turbo is significantly cheaper than GPT-4 (3x less for prompt tokens, 2x less for completion tokens).\n",
-"\n",
-"2. **GPTs:**\n",
-" - **Introduction:** Tailored versions of ChatGPT for specific purposes.\n",
-" - **Features:** Combine instructions, expanded knowledge, and actions for better performance and control.\n",
-" - **Ease of Use:** Can be programmed through conversation, no coding required.\n",
-" - **Customization:** Options to create private GPTs, share publicly, or make them exclusive to a company.\n",
-" - **GPT Store:** Launching later this month for sharing and discovering GPTs.\n",
-"\n",
-"3. **Assistance API:**\n",
-" - **Features:** Includes persistent threads, built-in retrieval, code interpreter, and improved function calling.\n",
-" - **Integration:** Designed to integrate intelligence into various applications, providing \"superpowers on demand.\"\n",
-"\n",
-"### Closing Remarks\n",
-"- **Future Outlook:** The technology launched today is just the beginning, with more advancements in the pipeline.\n",
-"- **Gratitude:** Thanks to the attendees and a promise of more exciting developments in the future.\n",
-"\n",
-"### Conclusion\n",
-"- **Event End:** The event concludes with applause and a final thank you to the audience."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"## Video Summary: OpenAI Dev Day\n",
+"\n",
+"### Introduction\n",
+"- The video begins with the title \"OpenAI Dev Day\" and transitions to a keynote recap.\n",
+"\n",
+"### Event Overview\n",
+"- The event is held at a venue with a sign reading \"OpenAI Dev Day.\"\n",
+"- Attendees are seen entering and gathering in a large hall.\n",
+"\n",
+"### Keynote Presentation\n",
+"- The keynote speaker introduces the event and announces the launch of GPT-4 Turbo.\n",
+"- **GPT-4 Turbo**:\n",
+" - Supports up to 128,000 tokens of context.\n",
+" - Introduces a new feature called JSON mode for valid JSON responses.\n",
+" - Improved function calling capabilities.\n",
+" - Enhanced instruction-following and knowledge retrieval from external documents or databases.\n",
+" - Knowledge updated up to April 2023.\n",
+" - Available in the API along with DALL-E 3, GPT-4 Turbo with Vision, and a new Text-to-Speech model.\n",
+"\n",
+"### Custom Models\n",
+"- Launch of a new program called Custom Models.\n",
+" - Researchers will collaborate with companies to create custom models tailored to specific use cases.\n",
+" - Higher rate limits and the ability to request changes to rate limits and quotas directly in API settings.\n",
+"\n",
+"### Pricing and Performance\n",
+"- **GPT-4 Turbo**:\n",
+" - 3x cheaper for prompt tokens and 2x cheaper for completion tokens compared to GPT-4.\n",
+" - Doubling the tokens per minute for established GPT-4 customers.\n",
+"\n",
+"### Introduction of GPTs\n",
+"- **GPTs**:\n",
+" - Tailored versions of ChatGPT for specific purposes.\n",
+" - Combine instructions, expanded knowledge, and actions for better performance and control.\n",
+" - Can be created without coding, through conversation.\n",
+" - Options to make GPTs private, share publicly, or create for company use in ChatGPT Enterprise.\n",
+" - Announcement of the upcoming GPT Store.\n",
+"\n",
+"### Assistance API\n",
+"- **Assistance API**:\n",
+" - Includes persistent threads for handling long conversation history.\n",
+" - Built-in retrieval and code interpreter with a working Python interpreter in a sandbox environment.\n",
+" - Improved function calling.\n",
+"\n",
+"### Conclusion\n",
+"- The speaker emphasizes the potential of integrating intelligence everywhere, providing \"superpowers on demand.\"\n",
+"- Encourages attendees to return next year, hinting at even more advanced developments.\n",
+"- The event concludes with thanks to the attendees.\n",
+"\n",
+"### Closing\n",
+"- The video ends with the OpenAI logo and a final thank you message.\n"
+]
 }
 ],
 "source": [
@@ -653,7 +657,7 @@
 "],\n",
 " temperature=0,\n",
 ")\n",
-"display(Markdown(response.choices[0].message.content))"
+"print(response.choices[0].message.content)"
 ]
 },
 {
@@ -684,16 +688,12 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"Visual QA:Sam Altman used the example about raising windows and turning the radio on to demonstrate the function calling capabilities of the new model. The example illustrated how the model can interpret and execute specific commands by calling appropriate functions, showcasing its ability to handle complex tasks and integrate with external systems or APIs. This feature enhances the model's utility in practical applications by allowing it to perform actions based on user instructions."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"Visual QA: \n",
+"Sam Altman used the example about raising windows and turning the radio on to demonstrate the function calling capability of GPT-4 Turbo. The example illustrated how the model can interpret and execute multiple commands in a more structured and efficient manner. The \"before\" and \"after\" comparison showed how the model can now directly call functions like `raise_windows()` and `radio_on()` based on natural language instructions, showcasing improved control and functionality.\n"
+]
 }
 ],
 "source": [
@@ -710,7 +710,7 @@
 " ],\n",
 " temperature=0,\n",
 ")\n",
-"display(Markdown(\"Visual QA:\" + qa_visual_response.choices[0].message.content))"
+"print(\"Visual QA:\\n\" + qa_visual_response.choices[0].message.content)"
 ]
 },
 {
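The `raise_windows()` / `radio_on()` demo described in the Visual QA output above can be sketched locally. The tool schemas follow the Chat Completions `tools` format, but the function bodies, descriptions, and the hard-coded list of requested calls are illustrative stand-ins for a live API round-trip that would return tool calls to dispatch:

```python
# Hypothetical car controls named in the keynote example.
def raise_windows() -> str:
    return "windows raised"

def radio_on() -> str:
    return "radio on"

# Tool schemas the model would receive (neither function takes parameters).
tools = [
    {"type": "function",
     "function": {"name": "raise_windows",
                  "description": "Raise the car windows.",
                  "parameters": {"type": "object", "properties": {}}}},
    {"type": "function",
     "function": {"name": "radio_on",
                  "description": "Turn the radio on.",
                  "parameters": {"type": "object", "properties": {}}}},
]

# With parallel function calling, a single prompt ("raise the windows and turn
# on the radio") can yield both tool calls at once; dispatch each by name.
dispatch = {"raise_windows": raise_windows, "radio_on": radio_on}
requested = ["raise_windows", "radio_on"]  # stand-in for the model's tool calls
results = [dispatch[name]() for name in requested]
print(results)  # ['windows raised', 'radio on']
```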
@@ -719,17 +719,12 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"Audio QA:\n",
-"The provided transcription does not include any mention of Sam Altman or an example about raising windows and turning the radio on. Therefore, I cannot provide an answer based on the given transcription."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"Audio QA:\n",
+"The provided transcription does not include any mention of Sam Altman or an example about raising windows and turning the radio on. Therefore, I cannot provide an answer based on the given transcription.\n"
+]
 }
 ],
 "source": [
@@ -741,7 +736,7 @@
 " ],\n",
 " temperature=0,\n",
 ")\n",
-"display(Markdown(\"Audio QA:\\n\" + qa_audio_response.choices[0].message.content))"
+"print(\"Audio QA:\\n\" + qa_audio_response.choices[0].message.content)"
 ]
 },
 {
@@ -750,17 +745,12 @@
 "metadata": {},
 "outputs": [
 {
-"data": {
-"text/markdown": [
-"Both QA:\n",
-"Sam Altman used the example of raising windows and turning the radio on to demonstrate the improved function calling capabilities of GPT-4 Turbo. The example illustrated how the model can now handle multiple function calls more effectively and follow instructions better. In the demonstration, the model was able to interpret the command to raise the windows and turn the radio on, showing how it can execute multiple actions in response to a single prompt. This highlights the enhanced ability of GPT-4 Turbo to manage complex tasks and provide more accurate and useful responses."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"Both QA:\n",
+"Sam Altman used the example of raising windows and turning the radio on to demonstrate the improved function calling capabilities of GPT-4 Turbo. The example illustrated how the model can now handle multiple function calls more effectively and follow instructions better. In the \"before\" scenario, the model had to be prompted separately for each action, whereas in the \"after\" scenario, the model could handle both actions in a single prompt, showcasing its enhanced ability to manage and execute multiple tasks simultaneously.\n"
+]
 }
 ],
 "source": [
@@ -779,7 +769,7 @@
 " ],\n",
 " temperature=0,\n",
 ")\n",
-"display(Markdown(\"Both QA:\\n\" + qa_both_response.choices[0].message.content))"
+"print(\"Both QA:\\n\" + qa_both_response.choices[0].message.content)"
 ]
 },
 {