Below are the latest scientific papers (sorted by publication date) on prompt engineering. We update this guide daily as new papers come in, and every week we incorporate summaries of these papers into the guides above.
- [Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing](https://arxiv.org/abs/2107.13586) (Jul 2021)
- [Chain of Hindsight Aligns Language Models with Feedback](https://arxiv.org/abs/2302.02676) (Feb 2023)
- [Language Is Not All You Need: Aligning Perception with Language Models](https://arxiv.org/abs/2302.14045) (Feb 2023)
- [Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data](https://arxiv.org/abs/2302.12822) (Feb 2023)
- [Active Prompting with Chain-of-Thought for Large Language Models](https://arxiv.org/abs/2302.12246) (Feb 2023)
- [More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models](https://arxiv.org/abs/2302.12173) (Feb 2023)
- [A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT](https://arxiv.org/abs/2302.11382) (Feb 2023)
- [Guiding Large Language Models via Directional Stimulus Prompting](https://arxiv.org/abs/2302.11520) (Feb 2023)
- [How Does In-Context Learning Help Prompt Tuning?](https://arxiv.org/abs/2302.11521) (Feb 2023)
- [Scalable Prompt Generation for Semi-supervised Learning with Language Models](https://arxiv.org/abs/2302.09236) (Feb 2023)
- [Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints](https://arxiv.org/abs/2302.09185) (Feb 2023)
- [À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting](https://arxiv.org/abs/2302.07994) (Feb 2023)
- [GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks](https://arxiv.org/abs/2302.08043) (Feb 2023)
- [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (Feb 2023)
- [SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains](https://arxiv.org/abs/2302.06868) (Feb 2023)
- [Evaluating the Robustness of Discrete Prompts](https://arxiv.org/abs/2302.05619) (Feb 2023)
- [Compositional Exemplars for In-context Learning](https://arxiv.org/abs/2302.05698) (Feb 2023)
- [Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery](https://arxiv.org/abs/2302.03668) (Feb 2023)
- [Multimodal Chain-of-Thought Reasoning in Language Models](https://arxiv.org/abs/2302.00923) (Feb 2023)
- [Large Language Models Can Be Easily Distracted by Irrelevant Context](https://arxiv.org/abs/2302.00093) (Feb 2023)
- [Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models](https://arxiv.org/abs/2302.00618) (Feb 2023)
- [Progressive Prompts: Continual Learning for Language Models](https://arxiv.org/abs/2301.12314) (Jan 2023)
- [Batch Prompting: Efficient Inference with LLM APIs](https://arxiv.org/abs/2301.08721) (Jan 2023)
- [Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP](https://arxiv.org/abs/2212.14024) (Dec 2022)
- [On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning](https://arxiv.org/abs/2212.08061) (Dec 2022)
- [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) (Dec 2022)
- [Successive Prompting for Decomposing Complex Questions](https://arxiv.org/abs/2212.04092) (Dec 2022)
- [Large Language Models are reasoners with Self-Verification](https://arxiv.org/abs/2212.09561v1) (Dec 2022)
- [Discovering Language Model Behaviors with Model-Written Evaluations](https://arxiv.org/abs/2212.09251) (Dec 2022)
- [PAL: Program-aided Language Models](https://arxiv.org/abs/2211.10435) (Nov 2022)
- [Large Language Models Are Human-Level Prompt Engineers](https://arxiv.org/abs/2211.01910) (Nov 2022)
- [Ignore Previous Prompt: Attack Techniques For Language Models](https://arxiv.org/abs/2211.09527) (Nov 2022)
- [Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods](https://arxiv.org/abs/2210.07321) (Nov 2022)
- [Teaching Algorithmic Reasoning via In-context Learning](https://arxiv.org/abs/2211.09066) (Nov 2022)
- [Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference](https://arxiv.org/abs/2211.11875) (Nov 2022)
- [Ask Me Anything: A simple strategy for prompting language models](https://paperswithcode.com/paper/ask-me-anything-a-simple-strategy-for) (Oct 2022)
- [Recitation-Augmented Language Models](https://arxiv.org/abs/2210.01296) (Oct 2022)
- [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) (Oct 2022)
- [Prompting GPT-3 To Be Reliable](https://arxiv.org/abs/2210.09150) (Oct 2022)
- [Decomposed Prompting: A Modular Approach for Solving Complex Tasks](https://arxiv.org/abs/2210.02406) (Oct 2022)
- [Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought](https://arxiv.org/abs/2210.01240v3) (Oct 2022)
- [Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples](https://arxiv.org/abs/2209.02128) (Sep 2022)
- [Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning](https://arxiv.org/abs/2209.14610) (Sep 2022)
- [Promptagator: Few-shot Dense Retrieval From 8 Examples](https://arxiv.org/abs/2209.11755) (Sep 2022)
- [Atlas: Few-shot Learning with Retrieval Augmented Language Models](https://arxiv.org/abs/2208.03299) (Nov 2022)
- [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/abs/2207.05987) (Jul 2022)
- [On the Advance of Making Language Models Better Reasoners](https://arxiv.org/abs/2206.02336) (Jun 2022)
- [Large Language Models are Zero-Shot Reasoners](https://arxiv.org/abs/2205.11916) (May 2022)
- [MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning](https://arxiv.org/abs/2205.00445) (May 2022)
- [Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?](https://arxiv.org/abs/2202.12837) (Feb 2022)
- [Chain of Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) (Jan 2022)
- [Show Your Work: Scratchpads for Intermediate Computation with Language Models](https://arxiv.org/abs/2112.00114) (Nov 2021)
- [AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts](https://arxiv.org/abs/2110.01691) (Oct 2021)
- [Generated Knowledge Prompting for Commonsense Reasoning](https://arxiv.org/abs/2110.08387) (Oct 2021)
- [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207) (Oct 2021)
- [Reframing Instructional Prompts to GPTk's Language](https://arxiv.org/abs/2109.07830) (Sep 2021)
- [Making Pre-trained Language Models Better Few-shot Learners](https://aclanthology.org/2021.acl-long.295) (Aug 2021)
- [Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity](https://arxiv.org/abs/2104.08786) (Apr 2021)
- [BERTese: Learning to Speak to BERT](https://aclanthology.org/2021.eacl-main.316) (Apr 2021)
- [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) (Apr 2021)
- [Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/abs/2102.07350) (Feb 2021)
- [Calibrate Before Use: Improving Few-Shot Performance of Language Models](https://arxiv.org/abs/2102.09690) (Feb 2021)
- [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://arxiv.org/abs/2101.00190) (Jan 2021)
- [Learning to Generate Task-Specific Adapters from Task Description](https://arxiv.org/abs/2101.00420) (Jan 2021)
- [Making Pre-trained Language Models Better Few-shot Learners](https://arxiv.org/abs/2012.15723) (Dec 2020)
- [Learning from Task Descriptions](https://aclanthology.org/2020.emnlp-main.105/) (Nov 2020)
- [AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts](https://arxiv.org/abs/2010.15980) (Oct 2020)
- [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165) (May 2020)
- [How Can We Know What Language Models Know?](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00324/96460/How-Can-We-Know-What-Language-Models-Know) (Jul 2020)
- [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) (Jan 2020)
- [Assessing Language Model Deployment with Risk Cards]() (Apr 2023)
- [Enhancing Large Language Models with Climate Resources](https://arxiv.org/abs/2304.00116) (Mar 2023)
- [BloombergGPT: A Large Language Model for Finance](https://arxiv.org/abs/2303.17564) (Mar 2023)
- [Medical Intervention Duration Estimation Using Language-enhanced Transformer Encoder with Medical Prompts](https://arxiv.org/abs/2303.17408) (Mar 2023)
- [Soft-prompt tuning to predict lung cancer using primary care free-text Dutch medical notes](https://arxiv.org/abs/2303.15846) (Mar 2023)
- [TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs](https://arxiv.org/abs/2303.16434) (Mar 2023)
- [Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning](https://arxiv.org/abs/2303.16445) (Mar 2023)
- [Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses](https://arxiv.org/abs/2303.15587) (Mar 2023)
- [Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning](https://arxiv.org/abs/2303.14375) (Mar 2023)
- [Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation](https://arxiv.org/abs/2303.15413) (Mar 2023)
- [Zero-shot Model Diagnosis](https://arxiv.org/abs/2303.15441#) (Mar 2023)
- [Prompting Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages](https://arxiv.org/abs/2303.13592) (Mar 2023)
- [SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization](https://arxiv.org/abs/2303.13035) (Mar 2023)
- [Large Language Models and Simple, Stupid Bugs](https://arxiv.org/abs/2303.11455) (Mar 2023)
- [SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models](https://arxiv.org/abs/2303.08896) (Mar 2023)
- [Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification](https://arxiv.org/abs/2303.07142) (Mar 2023)
- [ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction](https://arxiv.org/abs/2303.05063) (Mar 2023)
- [MathPrompter: Mathematical Reasoning using Large Language Models](https://arxiv.org/abs/2303.05398) (Mar 2023)
- [Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums](https://arxiv.org/abs/2303.05400) (Mar 2023)
- [Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting](https://arxiv.org/abs/2303.03199) (Mar 2023)
- [Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering](https://arxiv.org/abs/2303.01903) (Mar 2023)
- [Goal Driven Discovery of Distributional Differences via Language Descriptions](https://arxiv.org/abs/2302.14233) (Feb 2023)
- [Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models](https://arxiv.org/abs/2302.13439) (Feb 2023)
- [TabGenie: A Toolkit for Table-to-Text Generation](https://arxiv.org/abs/2302.14169) (Feb 2023)
- [SGL-PT: A Strong Graph Learner with Graph Prompt Tuning](https://arxiv.org/abs/2302.12449) (Feb 2023)
- [Few-Shot Table-to-Text Generation with Prompt-based Adapter](https://arxiv.org/abs/2302.12468) (Feb 2023)
- [Language Models Are Few-shot Learners for Prognostic Prediction](https://arxiv.org/abs/2302.12692) (Feb 2023)
- [STA: Self-controlled Text Augmentation for Improving Text Classifications](https://arxiv.org/abs/2302.12784) (Feb 2023)
- [Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback](https://arxiv.org/abs/2302.12813) (Feb 2023)
- [How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study](https://arxiv.org/abs/2302.10916) (Feb 2023)
- [Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales](https://arxiv.org/abs/2302.08961) (Feb 2023)
- [LabelPrompt: Effective Prompt-based Learning for Relation Classification](https://arxiv.org/abs/2302.08068) (Feb 2023)
- [Language Model Crossover: Variation through Few-Shot Prompting](https://arxiv.org/abs/2302.09236) (Feb 2023)
- [Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition](https://arxiv.org/abs/2302.08102) (Feb 2023)
- [Prompting for Multimodal Hateful Meme Classification](https://arxiv.org/abs/2302.04156) (Feb 2023)
- [PLACES: Prompting Language Models for Social Conversation Synthesis](https://arxiv.org/abs/2302.03269) (Feb 2023)
- [Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation](https://arxiv.org/abs/2302.01441) (Feb 2023)
- [Crawling the Internal Knowledge-Base of Language Models](https://arxiv.org/abs/2301.12810) (Jan 2023)