A new paper by [Lee et al. (2024)](https://arxiv.org/abs/2404.03414) proposes improving reasoning in LLMs with the help of small language models.
It first applies knowledge distillation to a small LM, fine-tuning it on rationales generated by the large LM, with the goal of narrowing the gap in reasoning capabilities.
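A minimal sketch of this distillation step might look like the following, assuming a set of (question, rationale) pairs has already been collected from the large LM; the model name, data fields, and hyperparameters here are illustrative placeholders, not details from the paper.

```python
# Sketch: distill large-LM rationales into a small LM via supervised fine-tuning.
# Assumes (question, rationale) pairs were already generated by the large LM;
# the model name and data fields below are hypothetical.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for the small LM; the paper's choice may differ

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical distillation data: rationales produced by prompting the large LM.
pairs = [
    {"question": "If Tom has 3 apples and buys 2 more, how many does he have?",
     "rationale": "Tom starts with 3 apples. Buying 2 more gives 3 + 2 = 5."},
]

def collate(batch):
    # Train the small LM to continue each question with the large LM's rationale.
    texts = [f"Q: {ex['question']}\nRationale: {ex['rationale']}" for ex in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    # Standard causal-LM objective; ignore padding positions in the loss.
    enc["labels"] = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
    return enc

loader = DataLoader(pairs, batch_size=1, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for batch in loader:
    loss = model(**batch).loss  # cross-entropy over the rationale tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After this stage, the distilled small LM can generate rationales on its own, which is what makes the subsequent narrowing of the reasoning gap measurable.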