Mirror of https://github.com/dair-ai/Prompt-Engineering-Guide (synced 2024-11-18 03:25:39 +00:00)

Commit daa32ef83e: Merge pull request #37 from slezica/patch-1

Move a paragraph to a better location in prompts-advanced-usage
```diff
@@ -79,9 +79,6 @@ The answer is True.
 
 That didn't work. It seems like basic standard prompting is not enough to get reliable responses for this type of reasoning problem. The example above provides basic information on the task, even with the examples. If you take a closer look at the task, it does involve more reasoning steps.
 
-More recently, chain-of-thought (CoT) prompting has been popularized to address more complex arithmetic,
-commonsense, and symbolic reasoning tasks. So let's talk about CoT next and see if we can solve the above task.
-
 Following the findings from [Min et al. (2022)](https://arxiv.org/abs/2202.12837), here are a few more tips about demonstrations/exemplars when doing few-shot:
 
 - the label space and the distribution of the input text specified by the demonstrations are both key (regardless of whether the labels are correct
```
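The Min et al. (2022) finding referenced in the hunk above is easy to demonstrate. Below is a minimal Python sketch of a few-shot prompt in which demonstration labels are assigned at random while the label space and the input text distribution are kept intact; the sentiment texts are illustrative, in the spirit of the guide's own demos, not quoted from the diff.

```python
import random

# Sketch of the Min et al. (2022) observation: demonstrations with *random*
# labels can still help few-shot performance, as long as the label space
# ("Negative"/"Positive") and the input distribution match the target task.
demonstrations = [
    "This is awesome!",
    "This is bad!",
    "Wow that movie was rad!",
]
label_space = ["Negative", "Positive"]

# Assign labels at random instead of using the ground-truth ones.
lines = [f"{text} // {random.choice(label_space)}" for text in demonstrations]

# Append the actual query, leaving the label for the model to complete.
lines.append("What a horrible show! //")

prompt = "\n".join(lines)
print(prompt)
```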
```diff
@@ -124,6 +121,9 @@ There is no consistency in the format above but the model still predicted the correct label.
 
 Overall, it seems that providing examples is useful in some places. When zero-shot prompting and few-shot prompting are not sufficient, it might mean that whatever was learned by the model isn't enough to do well at the task. From here it is recommended to start thinking about fine-tuning your own models.
 
+More recently, chain-of-thought (CoT) prompting has been popularized to address more complex arithmetic,
+commonsense, and symbolic reasoning tasks. So let's talk about CoT next and see if we can solve the above task.
+
 ---
 
 ## Chain-of-Thought Prompting
```
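To make the standard-vs-CoT contrast from the moved paragraph concrete, here is a small Python sketch assuming the odd-numbers task the guide uses around this section; the question wording and the 9 + 15 + 1 = 25 walkthrough follow the guide's CoT example, so treat it as an illustrative sketch rather than the exact prompt on the page.

```python
# Two few-shot exemplars for the arithmetic-reasoning task discussed above.
question = (
    "The odd numbers in this group add up to an even number: "
    "4, 8, 9, 15, 12, 2, 1."
)

# Standard few-shot exemplar: the question plus a bare answer.
standard_exemplar = f"{question}\nA: The answer is False."

# Chain-of-thought exemplar: the same question, but the answer first spells
# out the intermediate step (9 + 15 + 1 = 25, which is odd, so "even" is False).
cot_exemplar = (
    f"{question}\n"
    "A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False."
)

print(standard_exemplar, cot_exemplar, sep="\n\n")
```

Prepending one or more such reasoning-annotated exemplars to the target question is all that basic CoT prompting requires; the only difference from standard few-shot prompting is the spelled-out intermediate step.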