*Figure source: [Sun et al., 2023](https://arxiv.org/pdf/2212.09597.pdf)*
## How Can Reasoning be Elicited in LLMs?
Reasoning in LLMs can be elicited and enhanced through many different prompting approaches. [Qiao et al. (2023)](https://arxiv.org/abs/2212.09597) categorize reasoning research into two branches: strategy-enhanced reasoning and knowledge-enhanced reasoning. Reasoning strategies include prompt engineering, process optimization, and external engines. For instance, single-stage prompting strategies include [Chain-of-Thought](https://www.promptingguide.ai/techniques/cot) and [Active-Prompt](https://www.promptingguide.ai/techniques/activeprompt). A full taxonomy of reasoning with language model prompting can be found in the paper and is summarized in the figure below:
!["Reasoning Taxonomy"](../../img/research/reasoning-taxonomy.png)
*Figure source: [Qiao et al., 2023](https://arxiv.org/pdf/2212.09597.pdf)*
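To make the single-stage prompting idea concrete, here is a minimal sketch of how a few-shot Chain-of-Thought prompt is assembled: a worked exemplar with intermediate reasoning steps is prepended to the new question so the model imitates the step-by-step style. The exemplar, the `build_cot_prompt` helper, and the sample question are illustrative assumptions, not code from the paper, and the resulting string can be sent to any LLM client.

```python
# Illustrative few-shot chain-of-thought prompt construction.
# The exemplar shows the reasoning format we want the model to imitate.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model produces step-by-step reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?"
)
```

Ending the prompt with `A:` invites the model to continue with its own reasoning chain rather than a bare answer, which is the core of the Chain-of-Thought effect.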
[Huang et al. (2023)](https://arxiv.org/abs/2212.10403) also summarize techniques for improving or eliciting reasoning in LLMs such as GPT-3. These techniques range from fully supervised fine-tuning on datasets with explanations to prompting methods such as chain-of-thought, problem decomposition, and in-context learning. Below is a summary of the techniques described in the paper:
!["Reasoning Techniques"](../../img/research/reasoning-techniques.png)
*Figure source: [Huang et al., 2023](https://arxiv.org/pdf/2212.10403.pdf)*
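Problem decomposition, one of the prompting methods mentioned above, can be sketched as a two-stage prompt: first ask the model to break a problem into subquestions, then ask it to answer them in order. The templates and the `decomposition_prompts` helper below are assumptions for demonstration only, not an interface from the paper.

```python
# Illustrative sketch of problem-decomposition prompting.
# Stage 1 asks the model to split the problem into subquestions;
# stage 2 feeds those subquestions back to be answered in order.
DECOMPOSE_TEMPLATE = (
    "Break the following problem into simpler subquestions, one per line:\n"
    "Problem: {problem}\n"
    "Subquestions:"
)

SOLVE_TEMPLATE = (
    "Problem: {problem}\n"
    "Answer each subquestion in order, then give the final answer.\n"
    "Subquestions:\n{subquestions}\n"
    "Solution:"
)

def decomposition_prompts(problem: str, subquestions: list[str]) -> tuple[str, str]:
    """Return the decomposition-stage and solving-stage prompts."""
    stage1 = DECOMPOSE_TEMPLATE.format(problem=problem)
    stage2 = SOLVE_TEMPLATE.format(
        problem=problem, subquestions="\n".join(subquestions)
    )
    return stage1, stage2
```

In practice the subquestions passed to the second stage would be parsed from the model's response to the first, so the model solves easier pieces before committing to a final answer.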
