{
|
|
"cells": [
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"# Getting Started with Prompt Engineering\n",
|
|
"by DAIR.AI | Elvis Saravia\n",
|
|
"\n",
|
|
"\n",
|
|
"This notebook contains examples and exercises to learning about prompt engineering.\n",
|
|
"\n",
|
|
"We will be using the [OpenAI APIs](https://platform.openai.com/) for all examples. I am using the default settings `temperature=0.7` and `top-p=1`"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"---"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## 1. Prompt Engineering Basics\n",
|
|
"\n",
|
|
"Objectives\n",
|
|
"- Load the libraries\n",
|
|
"- Review the format\n",
|
|
"- Cover basic prompts\n",
|
|
"- Review common use cases"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Below we are loading the necessary libraries, utilities, and configurations."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 1,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"%%capture\n",
|
|
"# update or install the necessary libraries\n",
|
|
"!pip install --upgrade openai\n",
|
|
"!pip install --upgrade langchain\n",
|
|
"!pip install --upgrade python-dotenv"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 3,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"import openai\n",
|
|
"import os\n",
|
|
"import IPython\n",
|
|
"from langchain.llms import OpenAI\n",
|
|
"from dotenv import load_dotenv"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Load environment variables. You can use anything you like but I used `python-dotenv`. Just create a `.env` file with your `OPENAI_API_KEY` then load it."
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 5,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"load_dotenv()\n",
|
|
"\n",
|
|
"# API configuration\n",
|
|
"openai.api_key = os.getenv(\"OPENAI_API_KEY\")\n",
|
|
"\n",
|
|
"# for LangChain\n",
|
|
"os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\")"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 10,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"def set_open_params(\n",
|
|
" model=\"gpt-3.5-turbo\",\n",
|
|
" temperature=0.7,\n",
|
|
" max_tokens=256,\n",
|
|
" top_p=1,\n",
|
|
" frequency_penalty=0,\n",
|
|
" presence_penalty=0,\n",
|
|
"):\n",
|
|
" \"\"\" set openai parameters\"\"\"\n",
|
|
"\n",
|
|
" openai_params = {} \n",
|
|
"\n",
|
|
" openai_params['model'] = model\n",
|
|
" openai_params['temperature'] = temperature\n",
|
|
" openai_params['max_tokens'] = max_tokens\n",
|
|
" openai_params['top_p'] = top_p\n",
|
|
" openai_params['frequency_penalty'] = frequency_penalty\n",
|
|
" openai_params['presence_penalty'] = presence_penalty\n",
|
|
" return openai_params\n",
|
|
"\n",
|
|
"def get_completion(params, messages):\n",
|
|
" \"\"\" GET completion from openai api\"\"\"\n",
|
|
"\n",
|
|
" response = openai.chat.completions.create(\n",
|
|
" model = params['model'],\n",
|
|
" messages = messages,\n",
|
|
" temperature = params['temperature'],\n",
|
|
" max_tokens = params['max_tokens'],\n",
|
|
" top_p = params['top_p'],\n",
|
|
" frequency_penalty = params['frequency_penalty'],\n",
|
|
" presence_penalty = params['presence_penalty'],\n",
|
|
" )\n",
|
|
" return response"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Basic prompt example:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 11,
|
|
"metadata": {},
|
|
"outputs": [],
|
|
"source": [
|
|
"# basic example\n",
|
|
"params = set_open_params()\n",
|
|
"\n",
|
|
"prompt = \"The sky is\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 16,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/plain": [
|
|
"'blue.'"
|
|
]
|
|
},
|
|
"execution_count": 16,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"response.choices[0].message.content"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Try with different temperature to compare results:"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 17,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"blue during the day and black at night."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 17,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"params = set_open_params(temperature=0)\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1.1 Text Summarization"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 18,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"Antibiotics are medications that treat bacterial infections by either killing the bacteria or stopping their reproduction, but they are ineffective against viral infections and misuse can lead to antibiotic resistance."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 18,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"params = set_open_params(temperature=0.7)\n",
|
|
"prompt = \"\"\"Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body's immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance. \n",
|
|
"\n",
|
|
"Explain the above in one sentence:\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Exercise: Instruct the model to explain the paragraph in one sentence like \"I am 5\". Do you see any differences?"
|
|
]
|
|
},
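  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is one possible starting point for the exercise, not a definitive answer: it reuses the summarization prompt from the cell above and only swaps the final instruction (the exact wording is just a suggestion)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A possible starting point for the exercise above (the wording is only a suggestion).\n",
    "# Assumes the summarization cell was run, so `prompt` still holds the antibiotics paragraph.\n",
    "eli5_prompt = prompt.replace(\n",
    "    \"Explain the above in one sentence:\",\n",
    "    \"Explain the above in one sentence as if I were 5 years old:\"\n",
    ")\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": eli5_prompt}]\n",
    "\n",
    "response = get_completion(params, messages)\n",
    "IPython.display.Markdown(response.choices[0].message.content)"
   ]
  },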
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1.2 Question Answering"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 19,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"Mice."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 19,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"Answer the question based on the context below. Keep the answer short and concise. Respond \"Unsure about answer\" if not sure about the answer.\n",
|
|
"\n",
|
|
"Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n",
|
|
"\n",
|
|
"Question: What was OKT3 originally sourced from?\n",
|
|
"\n",
|
|
"Answer:\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)\n"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Context obtained from here: https://www.nature.com/articles/d41586-023-00400-x"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Exercise: Edit prompt and get the model to respond that it isn't sure about the answer. "
|
|
]
|
|
},
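  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to attempt the exercise, sketched below, is to swap in a question that the context cannot answer; the substitute question is just an example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A possible attempt at the exercise: ask something the context does not cover.\n",
    "# Assumes the question-answering cell was run, so `prompt` still holds the Teplizumab context.\n",
    "unsure_prompt = prompt.replace(\n",
    "    \"Question: What was OKT3 originally sourced from?\",\n",
    "    \"Question: In what year was Ortho Pharmaceutical founded?\"\n",
    ")\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": unsure_prompt}]\n",
    "\n",
    "response = get_completion(params, messages)\n",
    "IPython.display.Markdown(response.choices[0].message.content)"
   ]
  },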
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1.3 Text Classification"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 21,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"Neutral"
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 21,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"Classify the text into neutral, negative or positive.\n",
|
|
"\n",
|
|
"Text: I think the food was okay.\n",
|
|
"\n",
|
|
"Sentiment:\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Exercise: Modify the prompt to instruct the model to provide an explanation to the answer selected. "
|
|
]
|
|
},
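  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A possible modification for the exercise is sketched below: the prompt additionally asks for a one-sentence justification of the chosen label (the instruction wording is only a suggestion)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A possible modification for the exercise: also ask the model to justify its label.\n",
    "explain_prompt = \"\"\"Classify the text into neutral, negative or positive. Then explain in one sentence why you chose that label.\n",
    "\n",
    "Text: I think the food was okay.\n",
    "\n",
    "Sentiment:\"\"\"\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": explain_prompt}]\n",
    "\n",
    "response = get_completion(params, messages)\n",
    "IPython.display.Markdown(response.choices[0].message.content)"
   ]
  },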
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1.4 Role Playing"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 22,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"Certainly! Black holes are formed from the remnants of massive stars that have exhausted their nuclear fuel and undergone a supernova explosion. During this explosion, the outer layers of the star are blown away, leaving behind a dense core known as a stellar remnant. If the mass of the stellar remnant is above a certain threshold, called the Tolman-Oppenheimer-Volkoff limit, gravity becomes so strong that the core collapses in on itself, forming a black hole. This collapse is driven by the inward pull of gravity, and it results in a region of space where the gravitational field is so strong that nothing, not even light, can escape from it. This region is known as the event horizon."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 22,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"The following is a conversation with an AI research assistant. The assistant tone is technical and scientific.\n",
|
|
"\n",
|
|
"Human: Hello, who are you?\n",
|
|
"AI: Greeting! I am an AI research assistant. How can I help you today?\n",
|
|
"Human: Can you tell me about the creation of blackholes?\n",
|
|
"AI:\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Exercise: Modify the prompt to instruct the model to keep AI responses concise and short."
|
|
]
|
|
},
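  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible way to do the exercise is sketched below: it appends an explicit brevity instruction to the conversation preamble (the exact sentence is just a suggestion)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A possible tweak for the exercise: append a brevity instruction to the preamble.\n",
    "# Assumes the role-playing cell was run, so `prompt` still holds that conversation.\n",
    "concise_prompt = prompt.replace(\n",
    "    \"The assistant tone is technical and scientific.\",\n",
    "    \"The assistant tone is technical and scientific. The assistant keeps every answer to one or two short sentences.\"\n",
    ")\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": concise_prompt}]\n",
    "\n",
    "response = get_completion(params, messages)\n",
    "IPython.display.Markdown(response.choices[0].message.content)"
   ]
  },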
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1.5 Code Generation"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 23,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"SELECT StudentName\n",
|
|
"FROM students\n",
|
|
"WHERE DepartmentId = (SELECT DepartmentId FROM departments WHERE DepartmentName = 'Computer Science')"
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 23,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\\\"\\\"\\\"\\nTable departments, columns = [DepartmentId, DepartmentName]\\nTable students, columns = [DepartmentId, StudentId, StudentName]\\nCreate a MySQL query for all students in the Computer Science Department\\n\\\"\\\"\\\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)\n"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 1.6 Reasoning"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 24,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"To solve this problem, we need to follow these steps:\n",
|
|
"\n",
|
|
"Step 1: Identify the odd numbers in the given group. The odd numbers in the group are 15, 5, 13, 7, and 1.\n",
|
|
"\n",
|
|
"Step 2: Add the odd numbers together. 15 + 5 + 13 + 7 + 1 = 41.\n",
|
|
"\n",
|
|
"Step 3: Determine whether the sum is odd or even. In this case, the sum is 41, which is an odd number.\n",
|
|
"\n",
|
|
"Therefore, the sum of the odd numbers in the given group is an odd number."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 24,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. \n",
|
|
"\n",
|
|
"Solve by breaking the problem into steps. First, identify the odd numbers, add them, and indicate whether the result is odd or even.\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"Exercise: Improve the prompt to have a better structure and output format."
|
|
]
|
|
},
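  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A possible restructuring for the exercise is sketched below: the steps are numbered explicitly and a fixed output format is requested (both the steps and the format are only suggestions)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A possible restructuring for the exercise: explicit numbered steps and a fixed output format.\n",
    "structured_prompt = \"\"\"The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.\n",
    "\n",
    "Solve by following these steps:\n",
    "1. List the odd numbers.\n",
    "2. Add them up.\n",
    "3. State whether the sum is odd or even.\n",
    "\n",
    "Use this output format:\n",
    "Odd numbers: <list>\n",
    "Sum: <number>\n",
    "Answer: <odd or even>\"\"\"\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": structured_prompt}]\n",
    "\n",
    "response = get_completion(params, messages)\n",
    "IPython.display.Markdown(response.choices[0].message.content)"
   ]
  },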
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"## 2. Advanced Prompting Techniques\n",
|
|
"\n",
|
|
"Objectives:\n",
|
|
"\n",
|
|
"- Cover more advanced techniques for prompting: few-shot, chain-of-thoughts,..."
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 2.2 Few-shot prompts"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 25,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"The answer is False."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 25,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.\n",
|
|
"A: The answer is False.\n",
|
|
"\n",
|
|
"The odd numbers in this group add up to an even number: 17, 10, 19, 4, 8, 12, 24.\n",
|
|
"A: The answer is True.\n",
|
|
"\n",
|
|
"The odd numbers in this group add up to an even number: 16, 11, 14, 4, 8, 13, 24.\n",
|
|
"A: The answer is True.\n",
|
|
"\n",
|
|
"The odd numbers in this group add up to an even number: 17, 9, 10, 12, 13, 4, 2.\n",
|
|
"A: The answer is False.\n",
|
|
"\n",
|
|
"The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. \n",
|
|
"A:\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 2.3 Chain-of-Thought (CoT) Prompting"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 26,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"Adding all the odd numbers (15, 5, 13, 7, 1) gives 41. The answer is False."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 26,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.\n",
|
|
"A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.\n",
|
|
"\n",
|
|
"The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1. \n",
|
|
"A:\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 2.4 Zero-shot CoT"
|
|
]
|
|
},
|
|
{
|
|
"cell_type": "code",
|
|
"execution_count": 28,
|
|
"metadata": {},
|
|
"outputs": [
|
|
{
|
|
"data": {
|
|
"text/markdown": [
|
|
"Step 1: Bought 10 apples.\n",
|
|
"Step 2: Gave 2 apples to the neighbor and 2 apples to the repairman.\n",
|
|
"Remaining apples: 10 - 2 - 2 = 6 apples.\n",
|
|
"Step 3: Bought 5 more apples.\n",
|
|
"Total apples now: 6 + 5 = 11 apples.\n",
|
|
"Step 4: Ate 1 apple.\n",
|
|
"Remaining apples: 11 - 1 = 10 apples.\n",
|
|
"\n",
|
|
"Final answer: You remained with 10 apples."
|
|
],
|
|
"text/plain": [
|
|
"<IPython.core.display.Markdown object>"
|
|
]
|
|
},
|
|
"execution_count": 28,
|
|
"metadata": {},
|
|
"output_type": "execute_result"
|
|
}
|
|
],
|
|
"source": [
|
|
"prompt = \"\"\"I went to the market and bought 10 apples. I gave 2 apples to the neighbor and 2 to the repairman. I then went and bought 5 more apples and ate 1. How many apples did I remain with?\n",
|
|
"\n",
|
|
"Let's think step by step.\"\"\"\n",
|
|
"\n",
|
|
"messages = [\n",
|
|
" {\n",
|
|
" \"role\": \"user\",\n",
|
|
" \"content\": prompt\n",
|
|
" }\n",
|
|
"]\n",
|
|
"\n",
|
|
"response = get_completion(params, messages)\n",
|
|
"IPython.display.Markdown(response.choices[0].message.content)"
|
|
]
|
|
},
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"### 2.5 Self-Consistency\n",
|
|
"As an exercise, check examples in our [guide](https://github.com/dair-ai/Prompt-Engineering-Guide/blob/main/guides/prompts-advanced-usage.md#self-consistency) and try them here. \n",
|
|
"\n",
|
|
"### 2.6 Generate Knowledge Prompting\n",
|
|
"\n",
|
|
"As an exercise, check examples in our [guide](https://github.com/dair-ai/Prompt-Engineering-Guide/blob/main/guides/prompts-advanced-usage.md#generated-knowledge-prompting) and try them here. "
|
|
]
|
|
},
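  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is a minimal self-consistency sketch to get started, assuming the helpers defined earlier are loaded: the same chain-of-thought question (taken from the guide) is sampled several times at a non-zero temperature and the final answers are majority-voted. The regex-based answer extraction is a deliberately simple heuristic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal self-consistency sketch: sample several reasoning paths for the same question\n",
    "# and take a majority vote over the parsed final answers.\n",
    "import re\n",
    "from collections import Counter\n",
    "\n",
    "sc_prompt = \"\"\"When I was 6 my sister was half my age. Now I'm 70 how old is my sister?\n",
    "\n",
    "Let's think step by step. End your response with 'Answer: <number>'.\"\"\"\n",
    "\n",
    "sc_params = set_open_params(temperature=0.7)\n",
    "messages = [{\"role\": \"user\", \"content\": sc_prompt}]\n",
    "\n",
    "answers = []\n",
    "for _ in range(5):\n",
    "    response = get_completion(sc_params, messages)\n",
    "    text = response.choices[0].message.content\n",
    "    match = re.search(r\"Answer:\\s*(-?\\d+)\", text)\n",
    "    if match:\n",
    "        answers.append(match.group(1))\n",
    "\n",
    "print(\"Sampled answers:\", answers)\n",
    "if answers:\n",
    "    print(\"Majority answer:\", Counter(answers).most_common(1)[0][0])"
   ]
  },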
|
|
{
|
|
"attachments": {},
|
|
"cell_type": "markdown",
|
|
"metadata": {},
|
|
"source": [
|
|
"---"
|
|
]
|
|
}
|
|
],
|
|
"metadata": {
|
|
"kernelspec": {
|
|
"display_name": "promptlecture",
|
|
"language": "python",
|
|
"name": "python3"
|
|
},
|
|
"language_info": {
|
|
"codemirror_mode": {
|
|
"name": "ipython",
|
|
"version": 3
|
|
},
|
|
"file_extension": ".py",
|
|
"mimetype": "text/x-python",
|
|
"name": "python",
|
|
"nbconvert_exporter": "python",
|
|
"pygments_lexer": "ipython3",
|
|
"version": "3.9.18"
|
|
},
|
|
"orig_nbformat": 4,
|
|
"vscode": {
|
|
"interpreter": {
|
|
"hash": "f38e0373277d6f71ee44ee8fea5f1d408ad6999fda15d538a69a99a1665a839d"
|
|
}
|
|
}
|
|
},
|
|
"nbformat": 4,
|
|
"nbformat_minor": 2
|
|
}
|