Merge pull request #282 from liuliuOD/fix/technique_to_improve_reliability
[Fix] typo in techniques_to_improve_reliability.md
Commit: 4b8069e494
@@ -88,7 +88,7 @@ Solution:
 (c) Unknown; there is not enough information to determine whether Colonel Mustard was in the observatory with the candlestick
 ```

-Although clues 3 and 5 establish that Colonel Mustard was the only person in the observatory and that the person in the observatory had the candlestick, the models fails to combine them into a correct answer of (a) Yes.
+Although clues 3 and 5 establish that Colonel Mustard was the only person in the observatory and that the person in the observatory had the candlestick, the model fails to combine them into a correct answer of (a) Yes.

 However, instead of asking for the answer directly, we can split the task into three pieces:

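Note on the paragraph touched in this hunk: "split the task into three pieces" refers to chaining prompts so the model reasons before it answers. Below is a minimal sketch of that chaining, assuming the openai v1 Python client; the particular three-way split, the prompts, and the model name are illustrative guesses, not the cookbook's exact wording.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

puzzle = "..."  # the Clue-style puzzle text and its numbered clues go here

# Piece 1: ask only which clues are relevant to the question.
relevant = ask(f"{puzzle}\n\nList only the clues that are relevant to the question.")

# Piece 2: ask the model to reason over those clues before committing to anything.
reasoning = ask(
    f"{puzzle}\n\nRelevant clues:\n{relevant}\n\n"
    "Reason step by step about what these clues imply."
)

# Piece 3: ask for the final answer, given the reasoning.
answer = ask(
    f"{puzzle}\n\nReasoning:\n{reasoning}\n\n"
    "Based on this reasoning, answer (a) Yes, (b) No, or (c) Unknown."
)
print(answer)
```

Because each call sees the previous call's output, a failure to combine clues surfaces in the intermediate steps rather than being hidden inside a single direct answer.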
@@ -274,7 +274,7 @@ To learn more, read the [full paper](https://arxiv.org/abs/2201.11903).

 #### Implications

-One advantage of the few-shot example-based approach relative to the `Let's think step by step` technique is that you can more easily specify the format, length, and style of reasoning that you want the model to perform before landing on its final answer. This can be be particularly helpful in cases where the model isn't initially reasoning in the right way or depth.
+One advantage of the few-shot example-based approach relative to the `Let's think step by step` technique is that you can more easily specify the format, length, and style of reasoning that you want the model to perform before landing on its final answer. This can be particularly helpful in cases where the model isn't initially reasoning in the right way or depth.

 ### Fine-tuned

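For the "Implications" paragraph corrected in this hunk, here is a minimal sketch of a few-shot example-based prompt in which the worked examples pin down the format, length, and style of reasoning before the final answer. The examples, prompt layout, and model name are assumptions for illustration, not taken from the cookbook.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two worked examples establish the pattern: short reasoning, then a one-line answer.
few_shot_examples = """\
Q: If a train travels 60 miles in 1.5 hours, what is its average speed?
Reasoning: Speed is distance divided by time. 60 miles / 1.5 hours = 40 miles per hour.
Answer: 40 mph

Q: A shirt costs $20 and is discounted by 25%. What is the sale price?
Reasoning: 25% of $20 is $5. Subtracting the discount, $20 - $5 = $15.
Answer: $15
"""

new_question = "Q: A recipe needs 3 eggs per cake. How many eggs do 4 cakes need?"

# Ending the prompt with "Reasoning:" nudges the model to imitate the examples'
# reasoning style and depth before it writes its "Answer:" line.
prompt = f"{few_shot_examples}\n{new_question}\nReasoning:"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```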
@@ -282,7 +282,7 @@ One advantage of the few-shot example-based approach relative to the `Let's thin

 In general, to eke out maximum performance on a task, you'll need to fine-tune a custom model. However, fine-tuning a model using explanations may take thousands of example explanations, which are costly to write.

-In 2022, Eric Zelikman and Yuhuai Wu et al. published a clever procedure for using a few-shot prompt to generate a dataset of explanations that could be used to fine-tune a model. The idea is to use a few-shot prompt to generate candidate explanations, and only keep the explanations that produce the correct answer. Then, to get additional explanations for some of the incorrect answers, retry the the few-shot prompt but with correct answers given as part of the question. The authors called their procedure STaR (Self-taught Reasoner):
+In 2022, Eric Zelikman and Yuhuai Wu et al. published a clever procedure for using a few-shot prompt to generate a dataset of explanations that could be used to fine-tune a model. The idea is to use a few-shot prompt to generate candidate explanations, and only keep the explanations that produce the correct answer. Then, to get additional explanations for some of the incorrect answers, retry the few-shot prompt but with correct answers given as part of the question. The authors called their procedure STaR (Self-taught Reasoner):

 [![STaR procedure](images/star_fig1.png)
 <br>Source: *STaR: Bootstrapping Reasoning With Reasoning* by Eric Zelikman and Yujuai Wu et al. (2022)](https://arxiv.org/abs/2203.14465)
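The STaR paragraph fixed in this hunk describes a generate-filter-retry loop. A minimal sketch of that loop follows, assuming the openai v1 Python client; the helper names, prompts, data format, and model choice are hypothetical and are not the paper's or the cookbook's code.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = "..."  # a handful of hand-written question/reasoning/answer examples
labeled_questions = [("...", "...")]  # (question, correct_answer) pairs from your task

def generate_explanation(question: str, hint: str | None = None) -> str:
    """Ask for a worked explanation that ends in 'Answer: <answer>'."""
    prompt = f"{FEW_SHOT_EXAMPLES}\nQ: {question}\n"
    if hint is not None:
        # Rationalization step: the correct answer is given as part of the question.
        prompt += f"(The correct answer is {hint}; explain why.)\n"
    prompt += "Reasoning:"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def extract_answer(explanation: str) -> str:
    """Pull whatever follows the final 'Answer:' marker, if any."""
    return explanation.rsplit("Answer:", 1)[-1].strip() if "Answer:" in explanation else ""

dataset = []  # (question, explanation) pairs to fine-tune on
for question, correct_answer in labeled_questions:
    explanation = generate_explanation(question)
    if extract_answer(explanation) != correct_answer:
        # The explanation led to a wrong answer: retry with the answer as a hint.
        explanation = generate_explanation(question, hint=correct_answer)
        if extract_answer(explanation) != correct_answer:
            continue  # still wrong; drop this example entirely
    dataset.append((question, explanation))
# `dataset` can then be written out as a fine-tuning file for a custom model.
```

Keeping only explanations that reach the known-correct answer is what lets the generated dataset stand in for thousands of hand-written explanations.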