Mirror of https://github.com/dair-ai/Prompt-Engineering-Guide

Update README.md

commit fd1c2a112a (parent 6f971ebd89)
@@ -22,6 +22,7 @@ This guide contains a set of papers, learning guides, and tools related to prompt engineering.

- [Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing](https://arxiv.org/abs/2107.13586) (Jul 2021)
- Approaches/Techniques:
  - [Progressive Prompts: Continual Learning for Language Models](https://arxiv.org/abs/2301.12314) (Jan 2023)
  - [Batch Prompting: Efficient Inference with LLM APIs](https://arxiv.org/abs/2301.08721) (Jan 2023) (see the sketch after this hunk)
  - [Successive Prompting for Decomposing Complex Questions](https://arxiv.org/abs/2212.04092) (Dec 2022)
  - [Structured Prompting: Scaling In-Context Learning to 1,000 Examples](https://arxiv.org/abs/2212.06713) (Dec 2022)
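The batch prompting entry above is easy to picture in code. Below is a minimal sketch, not the paper's reference implementation: `call_llm` is a hypothetical stand-in for any text-completion API. Several questions are packed into one numbered prompt, and one answer per question is parsed from a single completion.

```python
# Minimal batch-prompting sketch: one LLM call answers k questions.
# `call_llm` is a hypothetical stand-in for any text-completion API.

def call_llm(prompt: str) -> str:
    """Hypothetical completion call; swap in a real LLM client here."""
    raise NotImplementedError

def batch_prompt(questions: list[str]) -> list[str]:
    # Number the questions so the model can answer them in order.
    numbered = "\n".join(f"Q[{i}]: {q}" for i, q in enumerate(questions, 1))
    prompt = (
        "Answer every question below. Reply with exactly one line per "
        "question, formatted as A[i]: <answer>.\n\n" + numbered
    )
    completion = call_llm(prompt)
    # Collect answers by index; any question the model skipped yields "".
    answers = {}
    for line in completion.splitlines():
        head, sep, text = line.partition("]:")
        if sep and head.startswith("A["):
            try:
                answers[int(head[2:])] = text.strip()
            except ValueError:
                pass  # skip malformed lines
    return [answers.get(i, "") for i in range(1, len(questions) + 1)]
```

The saving is in API calls and shared prompt tokens: any few-shot context is paid for once per batch instead of once per question.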
@@ -43,6 +44,7 @@ This guide contains a set of papers, learning guides, and tools related to prompt engineering.

  - [Making Pre-trained Language Models Better Few-shot Learners](https://aclanthology.org/2021.acl-long.295/) (Aug 2021)
  - [Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity](https://arxiv.org/abs/2104.08786) (April 2021)
  - [BERTese: Learning to Speak to BERT](https://aclanthology.org/2021.eacl-main.316/) (April 2021)
  - [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) (April 2021)
  - [Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/abs/2102.07350) (Feb 2021)
  - [Calibrate Before Use: Improving Few-Shot Performance of Language Models](https://arxiv.org/abs/2102.09690) (Feb 2021) (see the sketch after this hunk)
  - [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://arxiv.org/abs/2101.00190) (Jan 2021)
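The contextual calibration idea from "Calibrate Before Use" also fits in a few lines. This is a sketch under assumptions: `label_probs` is a hypothetical helper returning the model's probability for each candidate label, and "N/A" serves as the content-free input from the paper.

```python
import numpy as np

def label_probs(prompt: str, text: str) -> np.ndarray:
    """Hypothetical: per-label probabilities from the model for `text`."""
    raise NotImplementedError

def calibrated_predict(prompt: str, text: str) -> int:
    # Estimate the model's bias: its label distribution on a content-free input.
    p_cf = label_probs(prompt, "N/A")
    # Score the real input, then calibrate with W = diag(p_cf)^-1 and renormalize.
    p = label_probs(prompt, text)
    q = p / p_cf
    q = q / q.sum()
    return int(np.argmax(q))
```

Without a step like this, few-shot classifiers tend to favor labels that appear last or most often in the prompt; dividing by the content-free distribution cancels that bias before taking the argmax.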