Mirror of https://github.com/openai/openai-cookbook, synced 2024-11-19 15:25:37 +00:00
README typo fix: go -> rises
parent 35cde4e4c6
commit 9d9fe492b6
@@ -100,7 +100,7 @@ People are writing great tools and papers for improving outputs from GPT. Here a
 ### Papers on advanced prompting to improve reasoning
 
-- [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022)](https://arxiv.org/abs/2201.11903): Using few-shot prompts to ask models to think step by step improves their reasoning. PaLM's score on math word problems (GSM8K) go from 18% to 57%.
+- [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022)](https://arxiv.org/abs/2201.11903): Using few-shot prompts to ask models to think step by step improves their reasoning. PaLM's score on math word problems (GSM8K) rises from 18% to 57%.
 - [Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022)](https://arxiv.org/abs/2203.11171): Taking votes from multiple outputs improves accuracy even more. Voting across 40 outputs raises PaLM's score on math word problems further, from 57% to 74%, and `code-davinci-002`'s from 60% to 78%.
 - [Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023)](https://arxiv.org/abs/2305.10601): Searching over trees of step by step reasoning helps even more than voting over chains of thought. It lifts `GPT-4`'s scores on creative writing and crosswords.
 - [Language Models are Zero-Shot Reasoners (2022)](https://arxiv.org/abs/2205.11916): Telling instruction-following models to think step by step improves their reasoning. It lifts `text-davinci-002`'s score on math word problems (GSM8K) from 13% to 41%.
 
@@ -125,4 +125,4 @@ If there are examples or guides you'd like to see, feel free to suggest them on
 [openai help center]: https://help.openai.com/en/
 [openai examples]: https://beta.openai.com/examples
 [openai blog]: https://openai.com/blog/
 [issues page]: https://github.com/openai/openai-cookbook/issues
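To make the "think step by step" bullets in the diff above concrete: a minimal sketch of the zero-shot chain-of-thought prompt from Language Models are Zero-Shot Reasoners, assuming the `openai` Python package (v1+) with `OPENAI_API_KEY` set in the environment; the model name and the juggler question (taken from the paper) are illustrative, not part of the cookbook.

```python
# Zero-shot chain of thought: appending "Let's think step by step." prompts
# the model to produce intermediate reasoning before its final answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any chat model works
    messages=[{"role": "user", "content": f"{question}\n\nLet's think step by step."}],
)
print(response.choices[0].message.content)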
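And a sketch of the self-consistency voting described in Wang et al. (2022): sample several chains of thought at nonzero temperature, extract each final answer, and take the majority vote. The `Answer: <number>` output convention and the extraction regex are assumptions made for this example, not the paper's protocol.

```python
# Self-consistency: diverse sampling plus majority voting over final answers.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)
prompt = f"{question}\n\nLet's think step by step, then end with 'Answer: <number>'."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,      # diversity across samples is what makes voting help
    n=10,                 # the paper votes across 40 samples
)

# Pull the final answer out of each sampled chain of thought.
answers = []
for choice in response.choices:
    match = re.search(r"Answer:\s*(\d+)", choice.message.content or "")
    if match:
        answers.append(match.group(1))

# Majority vote: the most common answer across samples wins.
print(Counter(answers).most_common(1)[0][0] if answers else "no answer parsed")
```

Voting only helps when the samples disagree, which is why the sketch uses a nonzero temperature rather than greedy decoding.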