diff --git a/pages/papers.en.mdx b/pages/papers.en.mdx
index a98ed36..b08041b 100644
--- a/pages/papers.en.mdx
+++ b/pages/papers.en.mdx
@@ -4,374 +4,377 @@ The following are the latest papers (sorted by release date) on prompt engineeri
 
 ## Overviews
 
-  - [Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation](https://arxiv.org/abs/2305.16938) (May 2023)
-  - [Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study](https://arxiv.org/abs/2305.13860) (May 2023)
-  - [Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond](https://arxiv.org/abs/2304.13712) (April 2023)
-  - [Tool Learning with Foundation Models](https://arxiv.org/abs/2304.08354) (April 2023)
-  - [One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era](https://arxiv.org/abs/2304.06488) (April 2023)
-  - [A Bibliometric Review of Large Language Models Research from 2017 to 2023](https://arxiv.org/abs/2304.02020) (April 2023)
-  - [A Survey of Large Language Models](https://arxiv.org/abs/2303.18223) (April 2023)
-  - [Natural Language Reasoning, A Survey](https://arxiv.org/abs/2303.14725) (Mar 2023)
-  - [Augmented Language Models: a Survey](https://arxiv.org/abs/2302.07842) (Feb 2023)
-  - [A Survey for In-context Learning](https://arxiv.org/abs/2301.00234) (Dec 2022)
-  - [Towards Reasoning in Large Language Models: A Survey](https://arxiv.org/abs/2212.10403) (Dec 2022)
-  - [Reasoning with Language Model Prompting: A Survey](https://arxiv.org/abs/2212.09597) (Dec 2022)
-  - [Emergent Abilities of Large Language Models](https://arxiv.org/abs/2206.07682) (Jun 2022)
-  - [A Taxonomy of Prompt Modifiers for Text-To-Image Generation](https://arxiv.org/abs/2204.13988) (Apr 2022)
-  - [Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing](https://arxiv.org/abs/2107.13586) (Jul 2021)
+- [Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation](https://arxiv.org/abs/2305.16938) (May 2023)
+- [Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study](https://arxiv.org/abs/2305.13860) (May 2023)
+- [Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond](https://arxiv.org/abs/2304.13712) (April 2023)
+- [Tool Learning with Foundation Models](https://arxiv.org/abs/2304.08354) (April 2023)
+- [One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era](https://arxiv.org/abs/2304.06488) (April 2023)
+- [A Bibliometric Review of Large Language Models Research from 2017 to 2023](https://arxiv.org/abs/2304.02020) (April 2023)
+- [A Survey of Large Language Models](https://arxiv.org/abs/2303.18223) (April 2023)
+- [Natural Language Reasoning, A Survey](https://arxiv.org/abs/2303.14725) (March 2023)
+- [Augmented Language Models: a Survey](https://arxiv.org/abs/2302.07842) (February 2023)
+- [A Survey for In-context Learning](https://arxiv.org/abs/2301.00234) (December 2022)
+- [Towards Reasoning in Large Language Models: A Survey](https://arxiv.org/abs/2212.10403) (December 2022)
+- [Reasoning with Language Model Prompting: A Survey](https://arxiv.org/abs/2212.09597) (December 2022)
+- [Emergent Abilities of Large Language Models](https://arxiv.org/abs/2206.07682) (June 2022)
+- [A Taxonomy of Prompt Modifiers for Text-To-Image Generation](https://arxiv.org/abs/2204.13988) (April 2022)
+- [Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing](https://arxiv.org/abs/2107.13586) (July 2021)
 
 ## Approaches
 
-  - [Focused Prefix Tuning for Controllable Text Generation](https://arxiv.org/abs/2306.00369) (June 2023)
-  - [Exploring Lottery Prompts for Pre-trained Language Models](https://arxiv.org/abs/2305.19500) (May 2023)
-  - [Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses](https://arxiv.org/abs/2305.19339) (May 2023)
-  - [Let's Verify Step by Step](https://arxiv.org/abs/2305.20050) (May 2023)
-  - [Universality and Limitations of Prompt Tuning](https://arxiv.org/abs/2305.18787) (May 2023)
-  - [MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting](https://arxiv.org/abs/2305.16896) (May 2023)
-  - [PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents](https://arxiv.org/abs/2305.14564v1) (May 2023)
-  - [Reasoning with Language Model is Planning with World Model](https://arxiv.org/abs/2305.14992v1) (May 2023)
-  - [Self-Critique Prompting with Large Language Models for Inductive Instructions](https://arxiv.org/abs/2305.13733) (May 2023)
-  - [Better Zero-Shot Reasoning with Self-Adaptive Prompting](https://arxiv.org/abs/2305.14106) (May 2023)
-  - [Hierarchical Prompting Assists Large Language Model on Web Navigation](https://arxiv.org/abs/2305.14257) (May 2023)
-  - [Interactive Natural Language Processing](https://arxiv.org/abs/2305.13246) (May 2023)
-  - [Can We Edit Factual Knowledge by In-Context Learning?](https://arxiv.org/abs/2305.12740) (May 2023)
-  - [In-Context Learning of Large Language Models Explained as Kernel Regression](https://arxiv.org/abs/2305.12766) (May 2023)
-  - [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](https://arxiv.org/abs/2305.04091v3) (May 2023)
-  - [Meta-in-context learning in large language models](https://arxiv.org/abs/2305.12907) (May 2023)
-  - [Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs](https://arxiv.org/abs/2305.11860) (May 2023)
-  - [Post Hoc Explanations of Language Models Can Improve Language Models](https://arxiv.org/abs/2305.11426) (May 2023)
-  - [Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt](https://arxiv.org/abs/2305.11186) (May 2023)
-  - [TreePrompt: Learning to Compose Tree Prompts for Explainable Visual Grounding](https://arxiv.org/abs/2305.11497) (May 2023)
-  - [TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks](https://arxiv.org/abs/2305.11430) (May 2023)
-  - [Efficient Prompting via Dynamic In-Context Learning](https://arxiv.org/abs/2305.11170) (May 2023)
-  - [The Web Can Be Your Oyster for Improving Large Language Models](https://arxiv.org/abs/2305.10998) (May 2023)
-  - [Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency](https://arxiv.org/abs/2305.10713) (May 2023)
-  - [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601) (May 2023)
-  - [ZeroPrompt: Streaming Acoustic Encoders are Zero-Shot Masked LMs](https://arxiv.org/abs/2305.10649) (May 2023)
-  - [Chain-of-Symbol Prompting Elicits Planning in Large Language Models](https://arxiv.org/abs/2305.10276) (May 2023)
-  - [CooK: Empowering General-Purpose Language Models with Modular and Collaborative Knowledge](https://arxiv.org/abs/2305.09955) (May 2023)
-  - [What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning](https://arxiv.org/abs/2305.09731) (May 2023)
-  - [Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling](https://arxiv.org/abs/2305.09993) (May 2023)
-  - [Satisfiability-Aided Language Models Using Declarative Prompting](https://arxiv.org/abs/2305.09656) (May 2023)
-  - [Pre-Training to Learn in Context](https://arxiv.org/abs/2305.09137) (May 2023)
-  - [Boosted Prompt Ensembles for Large Language Models](https://arxiv.org/abs/2304.05970) (April 2023)
-  - [Global Prompt Cell: A Portable Control Module for Effective Prompt](https://arxiv.org/abs/2304.05642) (April 2023)
-  - [Why think step-by-step? Reasoning emerges from the locality of experience](https://arxiv.org/abs/2304.03843) (April 2023)
-  - [Revisiting Automated Prompting: Are We Actually Doing Better?](https://arxiv.org/abs/2304.03609) (April 2023)
-  - [REFINER: Reasoning Feedback on Intermediate Representations](https://arxiv.org/abs/2304.01904) (April 2023)
-  - [Reflexion: an autonomous agent with dynamic memory and self-reflection](https://arxiv.org/abs/2303.11366) (March 2023)
-  - [CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society](https://arxiv.org/abs/2303.17760) (Mar 2023)
-  - [Self-Refine: Iterative Refinement with Self-Feedback](https://arxiv.org/abs/2303.17651v1) (Mar 2023)
-  - [kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference](https://arxiv.org/abs/2303.13824) (Mar 2023)
-  - [Visual-Language Prompt Tuning with Knowledge-guided Context Optimization](https://arxiv.org/abs/2303.13283) (Mar 2023)
-  - [Fairness-guided Few-shot Prompting for Large Language Models](https://arxiv.org/abs/2303.13217) (Mar 2023)
-  - [Context-faithful Prompting for Large Language Models](https://arxiv.org/abs/2303.11315) (Mar 2023)
-  - [Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning](https://arxiv.org/abs/2303.10475) (Mar 2023)
-  - [UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation](https://arxiv.org/abs/2303.08518) (Mar 2023)
-  - [Model-tuning Via Prompts Makes NLP Models Adversarially Robust](https://arxiv.org/abs/2303.07320) (Mar 2023)
-  - [Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer](https://arxiv.org/abs/2303.03922) (March 2023)
-  - [CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification](https://arxiv.org/abs/2303.03628) (March 2023)
-  - [Larger language models do in-context learning differently](https://arxiv.org/abs/2303.03846) (March 2023)
-  - [OpenICL: An Open-Source Framework for In-context Learning](https://arxiv.org/abs/2303.02913) (March 2023)
-  - [Dynamic Prompting: A Unified Framework for Prompt Tuning](https://arxiv.org/abs/2303.02909) (March 2023)
-  - [ART: Automatic multi-step reasoning and tool-use for large language models](https://arxiv.org/abs/2303.09014) (March 2023)
-  - [Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning](https://arxiv.org/abs/2303.02861) (March 2023)
-  - [Effectiveness of Data Augmentation for Prefix Tuning with Limited Data](https://arxiv.org/abs/2303.02577) (March 2023)
-  - [Mixture of Soft Prompts for Controllable Data Generation](https://arxiv.org/abs/2303.01580) (March 2023)
-  - [Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners](https://arxiv.org/abs/2303.02151) (March 2023)
-  - [How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks](https://arxiv.org/abs/2303.00293) (March 2023)
-  - [Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT](https://arxiv.org/pdf/2302.10198.pdf) (Feb 2023)
-  - [EvoPrompting: Language Models for Code-Level Neural Architecture Search](https://arxiv.org/abs/2302.14838) (Feb 2023)
-  - [In-Context Instruction Learning](https://arxiv.org/abs/2302.14691) (Feb 2023)
-  - [Chain of Hindsight Aligns Language Models with Feedback](https://arxiv.org/abs/2302.02676) (Feb 2023)
-  - [Language Is Not All You Need: Aligning Perception with Language Models](https://arxiv.org/abs/2302.14045) (Feb 2023)
-  - [Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data](https://arxiv.org/abs/2302.12822) (Feb 2023)
-  - [Active Prompting with Chain-of-Thought for Large Language Models](https://arxiv.org/abs/2302.12246) (Feb 2023)
-  - [More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models](https://arxiv.org/abs/2302.12173) (Feb 2023)
-  - [A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT](https://arxiv.org/abs/2302.11382) (Feb 2023)
-  - [Guiding Large Language Models via Directional Stimulus Prompting](https://arxiv.org/abs/2302.11520) (Feb 2023)
-  - [How Does In-Context Learning Help Prompt Tuning?](https://arxiv.org/abs/2302.11521) (Feb 2023)
-  - [Scalable Prompt Generation for Semi-supervised Learning with Language Models](https://arxiv.org/abs/2302.09236) (Feb 2023)
-  - [Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints](https://arxiv.org/abs/2302.09185) (Feb 2023)
-  - [À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting](https://arxiv.org/abs/2302.07994) (Feb 2023)
-  - [GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks](https://arxiv.org/abs/2302.08043) (Feb 2023)
-  - [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (Feb 2023)
-  - [SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains](https://arxiv.org/abs/2302.06868) (Feb 2023)
-  - [Evaluating the Robustness of Discrete Prompts](https://arxiv.org/abs/2302.05619) (Feb 2023)
-  - [Compositional Exemplars for In-context Learning](https://arxiv.org/abs/2302.05698) (Feb 2023)
-  - [Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery](https://arxiv.org/abs/2302.03668) (Feb 2023)
-  - [Multimodal Chain-of-Thought Reasoning in Language Models](https://arxiv.org/abs/2302.00923) (Feb 2023)
-  - [Large Language Models Can Be Easily Distracted by Irrelevant Context](https://arxiv.org/abs/2302.00093) (Feb 2023)
-  - [Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models](https://arxiv.org/abs/2302.00618) (Feb 2023)
-  - [Progressive Prompts: Continual Learning for Language Models](https://arxiv.org/abs/2301.12314) (Jan 2023)
-  - [Batch Prompting: Efficient Inference with LLM APIs](https://arxiv.org/abs/2301.08721) (Jan 2023)
-  - [Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP](https://arxiv.org/abs/2212.14024) (Dec 2022)
-  - [On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning](https://arxiv.org/abs/2212.08061) (Dec 2022)
-  - [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) (Dec 2022)
-  - [Successive Prompting for Decomposing Complex Questions](https://arxiv.org/abs/2212.04092) (Dec 2022)
-  - [Large Language Models are reasoners with Self-Verification](https://arxiv.org/abs/2212.09561v1) (Dec 2022)
-  - [Discovering Language Model Behaviors with Model-Written Evaluations](https://arxiv.org/abs/2212.09251) (Dec 2022)
-  - [Structured Prompting: Scaling In-Context Learning to 1,000 Examples](https://arxiv.org/abs/2212.06713) (Dec 2022)
-  - [PAL: Program-aided Language Models](https://arxiv.org/abs/2211.10435) (Nov 2022)
-  - [Large Language Models Are Human-Level Prompt Engineers](https://arxiv.org/abs/2211.01910) (Nov 2022)
-  - [Ignore Previous Prompt: Attack Techniques For Language Models](https://arxiv.org/abs/2211.09527) (Nov 2022)
-  - [Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods](https://arxiv.org/abs/2210.07321) (Nov 2022)
-  - [Teaching Algorithmic Reasoning via In-context Learning](https://arxiv.org/abs/2211.09066) (Nov 2022)
-  - [Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference](https://arxiv.org/abs/2211.11875) (Nov 2022)
-  - [Ask Me Anything: A simple strategy for prompting language models](https://paperswithcode.com/paper/ask-me-anything-a-simple-strategy-for) (Oct 2022)
-  - [Recitation-Augmented Language Models](https://arxiv.org/abs/2210.01296) (Oct 2022)
-  - [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) (Oct 2022)
-  - [Prompting GPT-3 To Be Reliable](https://arxiv.org/abs/2210.09150) (Oct 2022)
-  - [Decomposed Prompting: A Modular Approach for Solving Complex Tasks](https://arxiv.org/abs/2210.02406) (Oct 2022)
-  - [Automatic Chain of Thought Prompting in Large Language Models](https://arxiv.org/abs/2210.03493) (Oct 2022)
-  - [Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought](https://arxiv.org/abs/2210.01240v3) (Oct 2022)
-  - [Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples](https://arxiv.org/abs/2209.02128) (Sep 2022)
-  - [Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning](https://arxiv.org/abs/2209.14610) (Sep 2022)
-  - [Promptagator: Few-shot Dense Retrieval From 8 Examples](https://arxiv.org/abs/2209.11755) (Sep 2022)
-  - [Atlas: Few-shot Learning with Retrieval Augmented Language Models](https://arxiv.org/abs/2208.03299) (Nov 2022)
-  - [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/abs/2207.05987) (July 2022)
-  - [On the Advance of Making Language Models Better Reasoners](https://arxiv.org/abs/2206.02336) (June 2022)
-  - [Large Language Models are Zero-Shot Reasoners](https://arxiv.org/abs/2205.11916) (May 2022)
-  - [Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations](https://arxiv.org/abs/2205.11822) (May 2022)
-  - [MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning](https://arxiv.org/abs/2205.00445) (May 2022)
-  - [PPT: Pre-trained Prompt Tuning for Few-shot Learning](https://aclanthology.org/2022.acl-long.576/) (May 2022)
-  - [Toxicity Detection with Generative Prompt-based Inference](https://arxiv.org/abs/2205.12390) (May 2022)
-  - [Learning to Transfer Prompts for Text Generation](https://arxiv.org/abs/2205.01543) (May 2022)
-  - [The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning](https://arxiv.org/abs/2205.03401) (May 2022)
-  - [A Taxonomy of Prompt Modifiers for Text-To-Image Generation](https://arxiv.org/abs/2204.13988) (Apr 2022)
-  - [PromptChainer: Chaining Large Language Model Prompts through Visual Programming](https://arxiv.org/abs/2203.06566) (Mar 2022)
-  - [Self-Consistency Improves Chain of Thought Reasoning in Language Models](https://arxiv.org/abs/2203.11171) (March 2022)
-  - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)
-  - [Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?](https://arxiv.org/abs/2202.12837) (Feb 2022)
-  - [Chain of Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) (Jan 2022)
-  - [Show Your Work: Scratchpads for Intermediate Computation with Language Models](https://arxiv.org/abs/2112.00114) (Nov 2021)
-  - [AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts](https://arxiv.org/abs/2110.01691) (Oct 2021)
-  - [Generated Knowledge Prompting for Commonsense Reasoning](https://arxiv.org/abs/2110.08387) (Oct 2021)
-  - [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207) (Oct 2021)
-  - [Reframing Instructional Prompts to GPTk's Language](https://arxiv.org/abs/2109.07830) (Sep 2021)
-  - [Design Guidelines for Prompt Engineering Text-to-Image Generative Models](https://arxiv.org/abs/2109.06977) (Sep 2021)
-  - [Making Pre-trained Language Models Better Few-shot Learners](https://aclanthology.org/2021.acl-long.295) (Aug 2021)
-  - [Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity](https://arxiv.org/abs/2104.08786) (April 2021)
-  - [BERTese: Learning to Speak to BERT](https://aclanthology.org/2021.eacl-main.316) (April 2021)
-  - [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) (April 2021)
-  - [Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/abs/2102.07350) (Feb 2021)
-  - [Calibrate Before Use: Improving Few-Shot Performance of Language Models](https://arxiv.org/abs/2102.09690) (Feb 2021)
-  - [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://arxiv.org/abs/2101.00190) (Jan 2021)
-  - [Learning to Generate Task-Specific Adapters from Task Description](https://arxiv.org/abs/2101.00420) (Jan 2021)
-  - [Making Pre-trained Language Models Better Few-shot Learners](https://arxiv.org/abs/2012.15723) (Dec 2020)
-  - [Learning from Task Descriptions](https://aclanthology.org/2020.emnlp-main.105/) (Nov 2020)
-  - [AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts](https://arxiv.org/abs/2010.15980) (Oct 2020)
-  - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165) (May 2020)
-  - [How Can We Know What Language Models Know?](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00324/96460/How-Can-We-Know-What-Language-Models-Know) (July 2020)
-  - [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) (Jan 2020)
+- [Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding](https://arxiv.org/abs/2307.15337) (July 2023)
+- [Focused Prefix Tuning for Controllable Text Generation](https://arxiv.org/abs/2306.00369) (June 2023)
+- [Exploring Lottery Prompts for Pre-trained Language Models](https://arxiv.org/abs/2305.19500) (May 2023)
+- [Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses](https://arxiv.org/abs/2305.19339) (May 2023)
+- [Let's Verify Step by Step](https://arxiv.org/abs/2305.20050) (May 2023)
+- [Universality and Limitations of Prompt Tuning](https://arxiv.org/abs/2305.18787) (May 2023)
+- [MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting](https://arxiv.org/abs/2305.16896) (May 2023)
+- [PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents](https://arxiv.org/abs/2305.14564v1) (May 2023)
+- [Reasoning with Language Model is Planning with World Model](https://arxiv.org/abs/2305.14992v1) (May 2023)
+- [Self-Critique Prompting with Large Language Models for Inductive Instructions](https://arxiv.org/abs/2305.13733) (May 2023)
+- [Better Zero-Shot Reasoning with Self-Adaptive Prompting](https://arxiv.org/abs/2305.14106) (May 2023)
+- [Hierarchical Prompting Assists Large Language Model on Web Navigation](https://arxiv.org/abs/2305.14257) (May 2023)
+- [Interactive Natural Language Processing](https://arxiv.org/abs/2305.13246) (May 2023)
+- [Can We Edit Factual Knowledge by In-Context Learning?](https://arxiv.org/abs/2305.12740) (May 2023)
+- [In-Context Learning of Large Language Models Explained as Kernel Regression](https://arxiv.org/abs/2305.12766) (May 2023)
+- [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](https://arxiv.org/abs/2305.04091v3) (May 2023)
+- [Meta-in-context learning in large language models](https://arxiv.org/abs/2305.12907) (May 2023)
+- [Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs](https://arxiv.org/abs/2305.11860) (May 2023)
+- [Post Hoc Explanations of Language Models Can Improve Language Models](https://arxiv.org/abs/2305.11426) (May 2023)
+- [Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt](https://arxiv.org/abs/2305.11186) (May 2023)
+- [TreePrompt: Learning to Compose Tree Prompts for Explainable Visual Grounding](https://arxiv.org/abs/2305.11497) (May 2023)
+- [TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks](https://arxiv.org/abs/2305.11430) (May 2023)
+- [Efficient Prompting via Dynamic In-Context Learning](https://arxiv.org/abs/2305.11170) (May 2023)
+- [The Web Can Be Your Oyster for Improving Large Language Models](https://arxiv.org/abs/2305.10998) (May 2023)
+- [Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency](https://arxiv.org/abs/2305.10713) (May 2023)
+- [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601) (May 2023)
+- [ZeroPrompt: Streaming Acoustic Encoders are Zero-Shot Masked LMs](https://arxiv.org/abs/2305.10649) (May 2023)
+- [Chain-of-Symbol Prompting Elicits Planning in Large Language Models](https://arxiv.org/abs/2305.10276) (May 2023)
+- [CooK: Empowering General-Purpose Language Models with Modular and Collaborative Knowledge](https://arxiv.org/abs/2305.09955) (May 2023)
+- [What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning](https://arxiv.org/abs/2305.09731) (May 2023)
+- [Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling](https://arxiv.org/abs/2305.09993) (May 2023)
+- [Satisfiability-Aided Language Models Using Declarative Prompting](https://arxiv.org/abs/2305.09656) (May 2023)
+- [Pre-Training to Learn in Context](https://arxiv.org/abs/2305.09137) (May 2023)
+- [Boosted Prompt Ensembles for Large Language Models](https://arxiv.org/abs/2304.05970) (April 2023)
+- [Global Prompt Cell: A Portable Control Module for Effective Prompt](https://arxiv.org/abs/2304.05642) (April 2023)
+- [Why think step-by-step? Reasoning emerges from the locality of experience](https://arxiv.org/abs/2304.03843) (April 2023)
+- [Revisiting Automated Prompting: Are We Actually Doing Better?](https://arxiv.org/abs/2304.03609) (April 2023)
+- [REFINER: Reasoning Feedback on Intermediate Representations](https://arxiv.org/abs/2304.01904) (April 2023)
+- [Reflexion: an autonomous agent with dynamic memory and self-reflection](https://arxiv.org/abs/2303.11366) (March 2023)
+- [CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society](https://arxiv.org/abs/2303.17760) (March 2023)
+- [Self-Refine: Iterative Refinement with Self-Feedback](https://arxiv.org/abs/2303.17651v1) (March 2023)
+- [kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference](https://arxiv.org/abs/2303.13824) (March 2023)
+- [Visual-Language Prompt Tuning with Knowledge-guided Context Optimization](https://arxiv.org/abs/2303.13283) (March 2023)
+- [Fairness-guided Few-shot Prompting for Large Language Models](https://arxiv.org/abs/2303.13217) (March 2023)
+- [Context-faithful Prompting for Large Language Models](https://arxiv.org/abs/2303.11315) (March 2023)
+- [Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning](https://arxiv.org/abs/2303.10475) (March 2023)
+- [UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation](https://arxiv.org/abs/2303.08518) (March 2023)
+- [Model-tuning Via Prompts Makes NLP Models Adversarially Robust](https://arxiv.org/abs/2303.07320) (March 2023)
+- [Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer](https://arxiv.org/abs/2303.03922) (March 2023)
+- [CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification](https://arxiv.org/abs/2303.03628) (March 2023)
+- [Larger language models do in-context learning differently](https://arxiv.org/abs/2303.03846) (March 2023)
+- [OpenICL: An Open-Source Framework for In-context Learning](https://arxiv.org/abs/2303.02913) (March 2023)
+- [Dynamic Prompting: A Unified Framework for Prompt Tuning](https://arxiv.org/abs/2303.02909) (March 2023)
+- [ART: Automatic multi-step reasoning and tool-use for large language models](https://arxiv.org/abs/2303.09014) (March 2023)
+- [Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning](https://arxiv.org/abs/2303.02861) (March 2023)
+- [Effectiveness of Data Augmentation for Prefix Tuning with Limited Data](https://arxiv.org/abs/2303.02577) (March 2023)
+- [Mixture of Soft Prompts for Controllable Data Generation](https://arxiv.org/abs/2303.01580) (March 2023)
+- [Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners](https://arxiv.org/abs/2303.02151) (March 2023)
+- [How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks](https://arxiv.org/abs/2303.00293) (March 2023)
+- [Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT](https://arxiv.org/pdf/2302.10198.pdf) (February 2023)
+- [EvoPrompting: Language Models for Code-Level Neural Architecture Search](https://arxiv.org/abs/2302.14838) (February 2023)
+- [In-Context Instruction Learning](https://arxiv.org/abs/2302.14691) (February 2023)
+- [Chain of Hindsight Aligns Language Models with Feedback](https://arxiv.org/abs/2302.02676) (February 2023)
+- [Language Is Not All You Need: Aligning Perception with Language Models](https://arxiv.org/abs/2302.14045) (February 2023)
+- [Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data](https://arxiv.org/abs/2302.12822) (February 2023)
+- [Active Prompting with Chain-of-Thought for Large Language Models](https://arxiv.org/abs/2302.12246) (February 2023)
+- [More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models](https://arxiv.org/abs/2302.12173) (February 2023)
+- [A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT](https://arxiv.org/abs/2302.11382) (February 2023)
+- [Guiding Large Language Models via Directional Stimulus Prompting](https://arxiv.org/abs/2302.11520) (February 2023)
+- [How Does In-Context Learning Help Prompt Tuning?](https://arxiv.org/abs/2302.11521) (February 2023)
+- [Scalable Prompt Generation for Semi-supervised Learning with Language Models](https://arxiv.org/abs/2302.09236) (February 2023)
+- [Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints](https://arxiv.org/abs/2302.09185) (February 2023)
+- [À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting](https://arxiv.org/abs/2302.07994) (February 2023)
+- [GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks](https://arxiv.org/abs/2302.08043) (February 2023)
+- [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (February 2023)
+- [SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains](https://arxiv.org/abs/2302.06868) (February 2023)
+- [Evaluating the Robustness of Discrete Prompts](https://arxiv.org/abs/2302.05619) (February 2023)
+- [Compositional Exemplars for In-context Learning](https://arxiv.org/abs/2302.05698) (February 2023)
+- [Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery](https://arxiv.org/abs/2302.03668) (February 2023)
+- [Multimodal Chain-of-Thought Reasoning in Language Models](https://arxiv.org/abs/2302.00923) (February 2023)
+- [Large Language Models Can Be Easily Distracted by Irrelevant Context](https://arxiv.org/abs/2302.00093) (February 2023)
+- [Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models](https://arxiv.org/abs/2302.00618) (February 2023)
+- [Progressive Prompts: Continual Learning for Language Models](https://arxiv.org/abs/2301.12314) (January 2023)
+- [Batch Prompting: Efficient Inference with LLM APIs](https://arxiv.org/abs/2301.08721) (January 2023)
+- [Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP](https://arxiv.org/abs/2212.14024) (December 2022)
+- [On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning](https://arxiv.org/abs/2212.08061) (December 2022)
+- [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) (December 2022)
+- [Successive Prompting for Decomposing Complex Questions](https://arxiv.org/abs/2212.04092) (December 2022)
+- [Large Language Models are reasoners with Self-Verification](https://arxiv.org/abs/2212.09561v1) (December 2022)
+- [Discovering Language Model Behaviors with Model-Written Evaluations](https://arxiv.org/abs/2212.09251) (December 2022)
+- [Structured Prompting: Scaling In-Context Learning to 1,000 Examples](https://arxiv.org/abs/2212.06713) (December 2022)
+- [PAL: Program-aided Language Models](https://arxiv.org/abs/2211.10435) (November 2022)
+- [Large Language Models Are Human-Level Prompt Engineers](https://arxiv.org/abs/2211.01910) (November 2022)
+- [Ignore Previous Prompt: Attack Techniques For Language Models](https://arxiv.org/abs/2211.09527) (November 2022)
+- [Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods](https://arxiv.org/abs/2210.07321) (November 2022)
+- [Teaching Algorithmic Reasoning via In-context Learning](https://arxiv.org/abs/2211.09066) (November 2022)
+- [Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference](https://arxiv.org/abs/2211.11875) (November 2022)
+- [Ask Me Anything: A simple strategy for prompting language models](https://paperswithcode.com/paper/ask-me-anything-a-simple-strategy-for) (October 2022)
+- [Recitation-Augmented Language Models](https://arxiv.org/abs/2210.01296) (October 2022)
+- [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) (October 2022)
+- [Prompting GPT-3 To Be Reliable](https://arxiv.org/abs/2210.09150) (October 2022)
+- [Decomposed Prompting: A Modular Approach for Solving Complex Tasks](https://arxiv.org/abs/2210.02406) (October 2022)
+- [Automatic Chain of Thought Prompting in Large Language Models](https://arxiv.org/abs/2210.03493) (October 2022)
+- [Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought](https://arxiv.org/abs/2210.01240v3) (October 2022)
+- [Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples](https://arxiv.org/abs/2209.02128) (September 2022)
+- [Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning](https://arxiv.org/abs/2209.14610) (September 2022)
+- [Promptagator: Few-shot Dense Retrieval From 8 Examples](https://arxiv.org/abs/2209.11755) (September 2022)
+- [Atlas: Few-shot Learning with Retrieval Augmented Language Models](https://arxiv.org/abs/2208.03299) (November 2022)
+- [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/abs/2207.05987) (July 2022)
+- [On the Advance of Making Language Models Better Reasoners](https://arxiv.org/abs/2206.02336) (June 2022)
+- [Large Language Models are Zero-Shot Reasoners](https://arxiv.org/abs/2205.11916) (May 2022)
+- [Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations](https://arxiv.org/abs/2205.11822) (May 2022)
+- [MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning](https://arxiv.org/abs/2205.00445) (May 2022)
+- [PPT: Pre-trained Prompt Tuning for Few-shot Learning](https://aclanthology.org/2022.acl-long.576/) (May 2022)
+- [Toxicity Detection with Generative Prompt-based Inference](https://arxiv.org/abs/2205.12390) (May 2022)
+- [Learning to Transfer Prompts for Text Generation](https://arxiv.org/abs/2205.01543) (May 2022)
+- [The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning](https://arxiv.org/abs/2205.03401) (May 2022)
+- [A Taxonomy of Prompt Modifiers for Text-To-Image Generation](https://arxiv.org/abs/2204.13988) (April 2022)
+- [PromptChainer: Chaining Large Language Model Prompts through Visual Programming](https://arxiv.org/abs/2203.06566) (March 2022)
+- [Self-Consistency Improves Chain of Thought Reasoning in Language Models](https://arxiv.org/abs/2203.11171) (March 2022)
+- [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155) (March 2022)
+- [Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?](https://arxiv.org/abs/2202.12837) (February 2022)
+- [Chain of Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) (January 2022)
+- [Show Your Work: Scratchpads for Intermediate Computation with Language Models](https://arxiv.org/abs/2112.00114) (November 2021)
+- [AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts](https://arxiv.org/abs/2110.01691) (October 2021)
+- [Generated Knowledge Prompting for Commonsense Reasoning](https://arxiv.org/abs/2110.08387) (October 2021)
+- [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207) (October 2021)
+- [Reframing Instructional Prompts to GPTk's Language](https://arxiv.org/abs/2109.07830) (September 2021)
+- [Design Guidelines for Prompt Engineering Text-to-Image Generative Models](https://arxiv.org/abs/2109.06977) (September 2021)
+- [Making Pre-trained Language Models Better Few-shot Learners](https://aclanthology.org/2021.acl-long.295) (August 2021)
+- [Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity](https://arxiv.org/abs/2104.08786) (April 2021)
+- [BERTese: Learning to Speak to BERT](https://aclanthology.org/2021.eacl-main.316) (April 2021)
+- [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) (April 2021)
+- [Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/abs/2102.07350) (February 2021)
+- [Calibrate Before Use: Improving Few-Shot Performance of Language Models](https://arxiv.org/abs/2102.09690) (February 2021)
+- [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://arxiv.org/abs/2101.00190) (January 2021)
+- [Learning to Generate Task-Specific Adapters from Task Description](https://arxiv.org/abs/2101.00420) (January 2021)
+- [Making Pre-trained Language Models Better Few-shot Learners](https://arxiv.org/abs/2012.15723) (December 2020)
+- [Learning from Task Descriptions](https://aclanthology.org/2020.emnlp-main.105/) (November 2020)
+- [AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts](https://arxiv.org/abs/2010.15980) (October 2020)
+- [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165) (May 2020)
+- [How Can We Know What Language Models Know?](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00324/96460/How-Can-We-Know-What-Language-Models-Know) (July 2020)
+- [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) (January 2020)
 
 ## Applications
 
-  - [Interpretable Math Word Problem Solution Generation Via Step-by-step Planning](https://arxiv.org/abs/2306.00784) (June 2023)
-  - [In-Context Learning User Simulators for Task-Oriented Dialog Systems](https://arxiv.org/abs/2306.00774) (June 2023)
-  - [SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL](https://arxiv.org/abs/2306.00739) (June 2023)
-  - [Effective Structured Prompting by Meta-Learning and Representative Verbalizer](https://arxiv.org/abs/2306.00618) (June 2023)
-  - [Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering](https://arxiv.org/abs/2306.00526) (June 2023)
-  - [Chain-Of-Thought Prompting Under Streaming Batch: A Case Study](https://arxiv.org/abs/2306.00550) (June 2023)
-  - [Red Teaming Language Model Detectors with Language Models](https://arxiv.org/abs/2305.19713) (May 2023)
-  - [Gorilla: Large Language Model Connected with Massive APIs](https://shishirpatil.github.io/gorilla/) (May 2023)
-  - [Deliberate then Generate: Enhanced Prompting Framework for Text Generation](https://arxiv.org/abs/2305.19835) (May 2023)
-  - [What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models?](https://arxiv.org/abs/2305.19597) (May 2023)
-  - [ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning](https://arxiv.org/abs/2305.19426) (May 2023)
-  - [SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models](https://arxiv.org/abs/2305.19308) (May 2023)
-  - [Grammar Prompting for Domain-Specific Language Generation with Large Language Models](https://arxiv.org/abs/2305.19234) (May 2023)
-  - [Mitigating Label Biases for In-context Learning](https://arxiv.org/abs/2305.19148) (May 2023)
-  - [Short Answer Grading Using One-shot Prompting and Text Similarity Scoring Model](https://arxiv.org/abs/2305.18638) (May 2023)
-  - [Strategic Reasoning with Language Models](https://arxiv.org/abs/2305.19165) (May 2023)
-  - [Dissecting Chain-of-Thought: A Study on Compositional In-Context Learning of MLPs](https://arxiv.org/abs/2305.18869) (May 2023)
-  - [Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models](https://arxiv.org/abs/2305.18189) (May 2023)
-  - [Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning](https://arxiv.org/abs/2305.18170) (May 2023)
-  - [Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A Study on Performance and Controllability in Prompt-Based Methods](https://arxiv.org/abs/2305.18156) (May 2023)
-  - [NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models](https://arxiv.org/abs/2305.17826) (May 2023)
-  - [Tab-CoT: Zero-shot Tabular Chain of Thought](https://arxiv.org/abs/2305.17812) (May 2023)
-  - [Evaluating GPT-3 Generated Explanations for Hateful Content Moderation](https://arxiv.org/abs/2305.17680) (May 2023)
-  - [Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks](https://arxiv.org/abs/2305.17653) (May 2023)
-  - [Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning](https://arxiv.org/abs/2305.17373) (May 2023)
-  - [Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance](https://arxiv.org/abs/2305.17306) (May 2023)
-  - [Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning](https://arxiv.org/abs/2305.17256) (May 2023)
-  - [Heterogeneous Value Evaluation for Large Language Models](https://arxiv.org/abs/2305.17147) (May 2023)
-  - [PromptNER: Prompt Locating and Typing for Named Entity Recognition](https://arxiv.org/abs/2305.17104) (May 2023)
-  - [Small Language Models Improve Giants by Rewriting Their Outputs](https://arxiv.org/abs/2305.13514v1) (May 2023)
-  - [On the Planning Abilities of Large Language Models -- A Critical Investigation](https://arxiv.org/abs/2305.15771v1) (May 2023)
-  - [Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models](https://arxiv.org/abs/2305.16582) (May 2023)
-  - [PRODIGY: Enabling In-context Learning Over Graphs](https://arxiv.org/abs/2305.12600v1) (May 2023)
-  - [Large Language Models are Few-Shot Health Learners](https://arxiv.org/abs/2305.15525v1) (May 2023)
-  - [Role-Play with Large Language Models](https://arxiv.org/abs/2305.16367) (May 2023)
-  - [Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations](https://arxiv.org/abs/2305.13299v1) (May 2023)
-  - [Fact-Checking Complex Claims with Program-Guided Reasoning](https://arxiv.org/abs/2305.12744v1) (May 2023)
-  - [Large Language Models as Tool Makers](https://arxiv.org/abs/2305.17126v1) (May 2023)
-  - [Iterative Forward Tuning Boosts In-context Learning in Language Models](https://arxiv.org/abs/2305.13016v2) (May 2023)
-  - [SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks](https://arxiv.org/abs/2305.17390v1) (May 2023)
-  - [Interactive Natural Language Processing](https://arxiv.org/abs/2305.13246v1) (May 2023)
-  - [An automatically discovered chain-of-thought prompt generalizes to novel models and datasets](https://arxiv.org/abs/2305.02897v1) (May 2023)
-  - [Large Language Model Guided Tree-of-Thought](https://arxiv.org/abs/2305.08291v1) (May 2023)
-  - [Active Retrieval Augmented Generation](https://arxiv.org/abs/2305.06983v1) (May 2023)
-  - [A PhD Student's Perspective on Research in NLP in the Era of Very Large Language Models](https://arxiv.org/abs/2305.12544v1) (May 2023)
-  - [Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings](https://arxiv.org/abs/2305.02317v1) (May 2023)
-  - [Mirages: On Anthropomorphism in Dialogue Systems](https://arxiv.org/abs/2305.09800v1) (May 2023)
-  - [Model evaluation for extreme risks](https://arxiv.org/abs/2305.15324v1) (May 2023)
-  - [Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting](https://arxiv.org/abs/2305.04388v1) (May 2023)
-  - [Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction](https://arxiv.org/abs/2305.02466v1) (May 2023)
-  - [PromptClass: Weakly-Supervised Text Classification with Prompting Enhanced Noise-Robust Self-Training](https://arxiv.org/abs/2305.13723) (May 2023)
-  - [Augmented Large Language Models with Parametric Knowledge Guiding](https://arxiv.org/abs/2305.04757v2) (May 2023)
-  - [Aligning Large Language Models through Synthetic Feedback](https://arxiv.org/abs/2305.13735) (May 2023)
-  - [Concept-aware Training Improves In-context Learning Ability of Language Models](https://arxiv.org/abs/2305.13775) (May 2023)
-  - [FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance](https://arxiv.org/abs/2305.05176v1) (May 2023)
-  - [Enhancing Black-Box Few-Shot Text Classification with Prompt-Based Data Augmentation](https://arxiv.org/abs/2305.13785) (May 2023)
-  - [Detecting automatically the layout of clinical documents to enhance the performances of downstream natural language processing](https://arxiv.org/abs/2305.13817) (May 2023)
-  - ["Is the Pope Catholic?" Applying Chain-of-Thought Reasoning to Understanding Conversational Implicatures](https://arxiv.org/abs/2305.13826) (May 2023)
-  - [Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction](https://arxiv.org/abs/2305.13903) (May 2023)
-  - [Generating Data for Symbolic Language with Large Language Models](https://arxiv.org/abs/2305.13917) (May 2023)
-  - [Make a Choice! Knowledge Base Question Answering with In-Context Learning](https://arxiv.org/abs/2305.13972) (May 2023)
-  - [Improving Language Models via Plug-and-Play Retrieval Feedback](https://arxiv.org/abs/2305.14002) (May 2023)
-  - [Multi-Granularity Prompts for Topic Shift Detection in Dialogue](https://arxiv.org/abs/2305.14006) (May 2023)
-  - [The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning](https://arxiv.org/abs/2305.14045) (May 2023)
-  - [Can Language Models Understand Physical Concepts?](https://arxiv.org/abs/2305.14057) (May 2023)
-  - [Evaluating Factual Consistency of Summaries with Large Language Models](https://arxiv.org/abs/2305.14069) (May 2023)
-  - [Dr.ICL: Demonstration-Retrieved In-context Learning](https://arxiv.org/abs/2305.14128) (May 2023)
-  - [Probing in Context: Toward Building Robust Classifiers via Probing Large Language Models](https://arxiv.org/abs/2305.14171) (May 2023)
-  - [Skill-Based Few-Shot Selection for In-Context Learning](https://arxiv.org/abs/2305.14210) (May 2023)
-  - [Exploring Chain-of-Thought Style Prompting for Text-to-SQL](https://arxiv.org/abs/2305.14215) (May 2023)
-  - [Enhancing Chat Language Models by Scaling High-quality Instructional Conversations](https://arxiv.org/abs/2305.14233) (May 2023)
-  - [On Learning to Summarize with Large Language Models as References](https://arxiv.org/abs/2305.14239) (May 2023)
-  - [Learning to Generate Novel Scientific Directions with Contextualized Literature-based Discovery](https://arxiv.org/abs/2305.14259) (May 2023)
-  - [Active Learning Principles for In-Context Learning with Large Language Models](https://arxiv.org/abs/2305.14264) (May 2023)
-  - [Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs](https://arxiv.org/abs/2305.14279) (May 2023)
-  - [Improving Factuality and Reasoning in Language Models through Multiagent Debate](https://arxiv.org/abs/2305.14325) (May 2023)
-  - [ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models](https://arxiv.org/abs/2305.14323) (May 2023)
-  - [WikiChat: A Few-Shot LLM-Based Chatbot Grounded with Wikipedia](https://arxiv.org/abs/2305.14292) (May 2023)
-  - [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/abs/2305.14283) (May 2023)
-  - [Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker](https://arxiv.org/abs/2305.13729) (May 2023)
-  - [Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method](https://arxiv.org/abs/2305.13412) (May 2023)
-  - [Small Language Models Improve Giants by Rewriting Their Outputs](https://arxiv.org/abs/2305.13514) (May 2023)
-  - [Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration](https://arxiv.org/abs/2305.13626) (May 2023)
-  - [Prompt-Based Monte-Carlo Tree Search for Goal-Oriented Dialogue Policy Planning](https://arxiv.org/abs/2305.13660) (May 2023)
-  - [Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment](https://arxiv.org/abs/2305.13669) (May 2023)
-  - [Making Language Models Better Tool Learners with Execution Feedback](https://arxiv.org/abs/2305.13068) (May 2023)
-  - [Text-to-SQL Error Correction with Language Models of Code](https://arxiv.org/abs/2305.13073) (May 2023)
-  - [Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models](https://arxiv.org/abs/2305.13085) (May 2023)
-  - [SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations](https://arxiv.org/abs/2305.13235) (May 2023)
-  - ["According to ..." Prompting Language Models Improves Quoting from Pre-Training Data](https://arxiv.org/abs/2305.13252) (May 2023)
-  - [Prompt-based methods may underestimate large language models' linguistic generalizations](https://arxiv.org/abs/2305.13264) (May 2023)
-  - [Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases](https://arxiv.org/abs/2305.13269) (May 2023)
-  - [Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations](https://arxiv.org/abs/2305.13299) (May 2023)
-  - [Automated Few-shot Classification with Instruction-Finetuned Language Models](https://arxiv.org/abs/2305.12576) (May 2023)
-  - [Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies](https://arxiv.org/abs/2305.12586) (May 2023)
-  - [MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction](https://arxiv.org/abs/2305.12627) (May 2023)
-  - [Learning Interpretable Style Embeddings via Prompting LLMs](https://arxiv.org/abs/2305.12696) (May 2023)
-  - [Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting](https://arxiv.org/abs/2305.12723) (May 2023)
-  - [Fact-Checking Complex Claims with Program-Guided Reasoning](https://arxiv.org/abs/2305.12744) (May 2023)
-  - [A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches](https://arxiv.org/abs/2305.12749) (May 2023)
-  - [This Prompt is Measuring \<mask\>: Evaluating Bias Evaluation in Language Models](https://arxiv.org/abs/2305.12757) (May 2023)
-  - [Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer](https://arxiv.org/abs/2305.12761) (May 2023)
-  - [Evaluating Prompt-based Question Answering for Object Prediction in the Open Research Knowledge Graph](https://arxiv.org/abs/2305.12900) (May 2023)
-  - [Explaining How Transformers Use Context to Build Predictions](https://arxiv.org/abs/2305.12535) (May 2023)
-  - [PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs](https://arxiv.org/abs/2305.12392) (May 2023)
-  - [PromptNER: A Prompting Method for Few-shot Named Entity Recognition via k Nearest Neighbor Search](https://arxiv.org/abs/2305.12217) (May 2023)
-  - [Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning](https://arxiv.org/abs/2305.12295) (May 2023)
-  - [Enhancing Few-shot NER with Prompt Ordering based Data Augmentation](https://arxiv.org/abs/2305.11791) (May 2023)
-  - [Chain-of-thought prompting for responding to in-depth dialogue questions with LLM](https://arxiv.org/abs/2305.11792) (May 2023)
-  - [How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings](https://arxiv.org/abs/2305.11853) (May 2023)
-  - [Evaluation of medium-large Language Models at zero-shot closed book generative question answering](https://arxiv.org/abs/2305.11991) (May 2023)
-  - [Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer](https://arxiv.org/abs/2305.12077) (May 2023)
-  - [Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions?](https://arxiv.org/abs/2305.12096) (May 2023)
-  - [Reasoning Implicit Sentiment with Chain-of-Thought Prompting](https://arxiv.org/abs/2305.11255) (May 2023)
- [Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs](https://arxiv.org/abs/2305.11334) (May 2023) - - [AutoTrial: Prompting Language Models for Clinical Trial Design](https://arxiv.org/abs/2305.11366) (May 2023) - - [CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing](https://arxiv.org/abs/2305.11738) (May 2023) - - [Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning](https://arxiv.org/abs/2305.11759) (May 2023) - - [Prompting with Pseudo-Code Instructions](https://arxiv.org/abs/2305.11790) (May 2023) - - [TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models](https://arxiv.org/abs/2305.11171) (May 2023) - - [Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors](https://arxiv.org/abs/2305.11159) (May 2023) - - [Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model](https://arxiv.org/abs/2305.11140) (May 2023) - - [Learning In-context Learning for Named Entity Recognition](https://arxiv.org/abs/2305.11038) (May 2023) - - [Take a Break in the Middle: Investigating Subgoals towards Hierarchical Script Generation](https://arxiv.org/abs/2305.10907) (May 2023) - - [TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse Relation Recognition](https://arxiv.org/abs/2305.10866) (May 2023) - - [Large Language Models can be Guided to Evade AI-Generated Text Detection](https://arxiv.org/abs/2305.10847) (May 2023) - - [Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning](https://arxiv.org/abs/2305.10613) (May 2023) - - [Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization](https://arxiv.org/abs/2305.11095) (May 2023) - - [Think Outside the Code: Brainstorming Boosts Large Language Models in Code Generation](https://arxiv.org/abs/2305.10679) (May 2023) - - [Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback](https://arxiv.org/abs/2305.10142) (May 2023) - - [ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing](https://arxiv.org/abs/2305.09770) (May 2023) - - [StructGPT: A General Framework for Large Language Model to Reason over Structured Data](https://arxiv.org/abs/2305.09645) (May 2023) - - [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) (May 2023) - - [Large Language Models are Built-in Autoregressive Search Engines](https://arxiv.org/abs/2305.09612) (May 2023) - - [MsPrompt: Multi-step Prompt Learning for Debiasing Few-shot Event Detection](https://arxiv.org/abs/2305.09335) (May 2023) - - [Exploring the Impact of Layer Normalization for Zero-shot Neural Machine Translation](https://arxiv.org/abs/2305.09312) (May 2023) - - [SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting](https://arxiv.org/abs/2305.09067) (May 2023) - - [Multi-modal Visual Understanding with Prompts for Semantic Information Disentanglement of Image](https://arxiv.org/abs/2305.09333) (May 2023) - - [Soft Prompt Decoding for Multilingual Dense Retrieval](https://arxiv.org/abs/2305.09025) (May 2023) - - [PaLM 2 Technical Report](https://ai.google/static/documents/palm2techreport.pdf) (May 2023) - - [Are LLMs All You Need for Task-Oriented Dialogue?](https://arxiv.org/abs/2304.06556) (April 2023) - - [HiPrompt: Few-Shot Biomedical Knowledge Fusion via 
Hierarchy-Oriented Prompting](https://arxiv.org/abs/2304.05973) (April 2023)
- - [Approximating Human Evaluation of Social Chatbots with Prompting](https://arxiv.org/abs/2304.05253) (April 2023)
- - [Automated Reading Passage Generation with OpenAI's Large Language Model](https://arxiv.org/abs/2304.04616) (April 2023)
- - [WebBrain: Learning to Generate Factually Correct Articles for Queries by Grounding on Large Web Corpus](https://arxiv.org/abs/2304.04358) (April 2023)
- - [Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition](https://arxiv.org/abs/2304.04704) (April 2023)
- - [GPT detectors are biased against non-native English writers](https://arxiv.org/abs/2304.02819) (April 2023)
- - [Zero-Shot Next-Item Recommendation using Large Pretrained Language Models](https://arxiv.org/abs/2304.03153) (April 2023)
- - [Large Language Models as Master Key: Unlocking the Secrets of Materials Science with GPT](https://arxiv.org/abs/2304.02213) (April 2023)
- - [Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning](https://arxiv.org/abs/2304.01295) (April 2023)
- - [Better Language Models of Code through Self-Improvement](https://arxiv.org/abs/2304.01228) (April 2023)
- - [PromptORE -- A Novel Approach Towards Fully Unsupervised Relation Extraction](https://arxiv.org/abs/2304.01209) (April 2023)
- - [Assessing Language Model Deployment with Risk Cards]() (April 2023)
- - [Enhancing Large Language Models with Climate Resources](https://arxiv.org/abs/2304.00116) (March 2023)
- - [BloombergGPT: A Large Language Model for Finance](https://arxiv.org/abs/2303.17564) (March 2023)
- - [Medical Intervention Duration Estimation Using Language-enhanced Transformer Encoder with Medical Prompts](https://arxiv.org/abs/2303.17408) (March 2023)
- - [Soft-prompt tuning to predict lung cancer using primary care free-text Dutch medical notes](https://arxiv.org/abs/2303.15846) (March 2023)
- - [TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs](https://arxiv.org/abs/2303.16434) (March 2023)
- - [Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning](https://arxiv.org/abs/2303.16445) (March 2023)
- - [Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses](https://arxiv.org/abs/2303.15587) (March 2023)
- - [Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning](https://arxiv.org/abs/2303.14375) (March 2023)
- - [Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation](https://arxiv.org/abs/2303.15413) (March 2023)
- - [Zero-shot Model Diagnosis](https://arxiv.org/abs/2303.15441#) (March 2023)
- - [Prompting Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages](https://arxiv.org/abs/2303.13592) (March 2023)
- - [SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization](https://arxiv.org/abs/2303.13035) (March 2023)
- - [Large Language Models and Simple, Stupid Bugs](https://arxiv.org/abs/2303.11455) (March 2023)
- - [Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses?](https://arxiv.org/abs/2303.09325) (Mar 2023)
- - [SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models](https://arxiv.org/abs/2303.08896) (Mar 2023)
- - [Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification](https://arxiv.org/abs/2303.07142) (March 2023)
- - [ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction](https://arxiv.org/abs/2303.05063) (March 2023)
- - [MathPrompter: Mathematical Reasoning using Large Language Models](https://arxiv.org/abs/2303.05398) (March 2023)
- - [Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums](https://arxiv.org/abs/2303.05400) (March 2023)
- - [Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting](https://arxiv.org/abs/2303.03199) (March 2023)
- - [Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering](https://arxiv.org/abs/2303.01903) (March 2023)
- - [Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis](https://arxiv.org/abs/2303.00815) (March 2023)
- - [SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks](https://arxiv.org/abs/2303.00733) (March 2023)
- - [Goal Driven Discovery of Distributional Differences via Language Descriptions](https://arxiv.org/abs/2302.14233) (Feb 2023)
- - [Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models](https://arxiv.org/abs/2302.13439) (Feb 2023)
- - [TabGenie: A Toolkit for Table-to-Text Generation](https://arxiv.org/abs/2302.14169) (Feb 2023)
- - [SGL-PT: A Strong Graph Learner with Graph Prompt Tuning](https://arxiv.org/abs/2302.12449) (Feb 2023)
- - [Few-Shot Table-to-Text Generation with Prompt-based Adapter](https://arxiv.org/abs/2302.12468) (Feb 2023)
- - [Language Models Are Few-shot Learners for Prognostic Prediction](https://arxiv.org/abs/2302.12692) (Feb 2023)
- - [STA: Self-controlled Text Augmentation for Improving Text Classifications](https://arxiv.org/abs/2302.12784) (Feb 2023)
- - [Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback](https://arxiv.org/abs/2302.12813) (Feb 2023)
- - [How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study](https://arxiv.org/abs/2302.10916) (Feb 2023)
- - [Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales](https://arxiv.org/abs/2302.08961) (Feb 2023)
- - [LabelPrompt: Effective Prompt-based Learning for Relation Classification](https://arxiv.org/abs/2302.08068) (Feb 2023)
- - [Language Model Crossover: Variation through Few-Shot Prompting](https://arxiv.org/abs/2302.09236) (Feb 2023)
- - [Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition](https://arxiv.org/abs/2302.08102) (Feb 2023)
- - [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (Feb 2023)
- - [Prompting for Multimodal Hateful Meme Classification](https://arxiv.org/abs/2302.04156) (Feb 2023)
- - [PLACES: Prompting Language Models for Social Conversation Synthesis](https://arxiv.org/abs/2302.03269) (Feb 2023)
- - [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/abs/2302.04761) (Feb 2023)
- - [Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation](https://arxiv.org/abs/2302.01441) (Feb 2023)
- - [Crawling the Internal Knowledge-Base of Language Models](https://arxiv.org/abs/2301.12810) (Jan 2023)
- - [Legal Prompt Engineering for Multilingual Legal Judgement Prediction](https://arxiv.org/abs/2212.02199) (Dec 2022)
- - [Investigating Prompt Engineering in Diffusion Models](https://arxiv.org/abs/2211.15462) (Nov 2022)
- - [Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](https://arxiv.org/abs/2209.09513v2) (Sep 2022)
- - [Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language](https://arxiv.org/abs/2210.15157) (Oct 2022)
- - [Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic?](https://arxiv.org/abs/2210.14699) (Oct 2022)
- - [Plot Writing From Scratch Pre-Trained Language Models](https://aclanthology.org/2022.inlg-main.5) (July 2022)
- - [Survey of Hallucination in Natural Language Generation](https://arxiv.org/abs/2202.03629) (Feb 2022)
+- [You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content](https://arxiv.org/abs/2308.05596) (August 2023)
+- [LLM As DBA](https://arxiv.org/abs/2308.05481) (August 2023)
+- [Interpretable Math Word Problem Solution Generation Via Step-by-step Planning](https://arxiv.org/abs/2306.00784) (June 2023)
+- [In-Context Learning User Simulators for Task-Oriented Dialog Systems](https://arxiv.org/abs/2306.00774) (June 2023)
+- [SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL](https://arxiv.org/abs/2306.00739) (June 2023)
+- [Effective Structured Prompting by Meta-Learning and Representative Verbalizer](https://arxiv.org/abs/2306.00618) (June 2023)
+- [Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering](https://arxiv.org/abs/2306.00526) (June 2023)
+- [Chain-Of-Thought Prompting Under Streaming Batch: A Case Study](https://arxiv.org/abs/2306.00550) (June 2023)
+- [Red Teaming Language Model Detectors with Language Models](https://arxiv.org/abs/2305.19713) (May 2023)
+- [Gorilla: Large Language Model Connected with Massive APIs](https://shishirpatil.github.io/gorilla/) (May 2023)
+- [Deliberate then Generate: Enhanced Prompting Framework for Text Generation](https://arxiv.org/abs/2305.19835) (May 2023)
+- [What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models?](https://arxiv.org/abs/2305.19597) (May 2023)
+- [ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning](https://arxiv.org/abs/2305.19426) (May 2023)
+- [SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models](https://arxiv.org/abs/2305.19308) (May 2023)
+- [Grammar Prompting for Domain-Specific Language Generation with Large Language Models](https://arxiv.org/abs/2305.19234) (May 2023)
+- [Mitigating Label Biases for In-context Learning](https://arxiv.org/abs/2305.19148) (May 2023)
+- [Short Answer Grading Using One-shot Prompting and Text Similarity Scoring Model](https://arxiv.org/abs/2305.18638) (May 2023)
+- [Strategic Reasoning with Language Models](https://arxiv.org/abs/2305.19165) (May 2023)
+- [Dissecting Chain-of-Thought: A Study on Compositional In-Context Learning of MLPs](https://arxiv.org/abs/2305.18869) (May 2023)
+- [Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models](https://arxiv.org/abs/2305.18189) (May 2023)
+- [Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning](https://arxiv.org/abs/2305.18170) (May 2023)
+- [Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A Study on Performance and Controllability in Prompt-Based Methods](https://arxiv.org/abs/2305.18156) (May 2023)
+- [NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models](https://arxiv.org/abs/2305.17826) (May 2023)
+- [Tab-CoT: Zero-shot Tabular Chain of Thought](https://arxiv.org/abs/2305.17812) (May 2023)
+- [Evaluating GPT-3 Generated Explanations for Hateful Content Moderation](https://arxiv.org/abs/2305.17680) (May 2023)
+- [Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks](https://arxiv.org/abs/2305.17653) (May 2023)
+- [Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning](https://arxiv.org/abs/2305.17373) (May 2023)
+- [Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance](https://arxiv.org/abs/2305.17306) (May 2023)
+- [Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning](https://arxiv.org/abs/2305.17256) (May 2023)
+- [Heterogeneous Value Evaluation for Large Language Models](https://arxiv.org/abs/2305.17147) (May 2023)
+- [PromptNER: Prompt Locating and Typing for Named Entity Recognition](https://arxiv.org/abs/2305.17104) (May 2023)
+- [Small Language Models Improve Giants by Rewriting Their Outputs](https://arxiv.org/abs/2305.13514v1) (May 2023)
+- [On the Planning Abilities of Large Language Models -- A Critical Investigation](https://arxiv.org/abs/2305.15771v1) (May 2023)
+- [Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models](https://arxiv.org/abs/2305.16582) (May 2023)
+- [PRODIGY: Enabling In-context Learning Over Graphs](https://arxiv.org/abs/2305.12600v1) (May 2023)
+- [Large Language Models are Few-Shot Health Learners](https://arxiv.org/abs/2305.15525v1) (May 2023)
+- [Role-Play with Large Language Models](https://arxiv.org/abs/2305.16367) (May 2023)
+- [Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations](https://arxiv.org/abs/2305.13299v1) (May 2023)
+- [Fact-Checking Complex Claims with Program-Guided Reasoning](https://arxiv.org/abs/2305.12744v1) (May 2023)
+- [Large Language Models as Tool Makers](https://arxiv.org/abs/2305.17126v1) (May 2023)
+- [Iterative Forward Tuning Boosts In-context Learning in Language Models](https://arxiv.org/abs/2305.13016v2) (May 2023)
+- [SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks](https://arxiv.org/abs/2305.17390v1) (May 2023)
+- [Interactive Natural Language Processing](https://arxiv.org/abs/2305.13246v1) (May 2023)
+- [An automatically discovered chain-of-thought prompt generalizes to novel models and datasets](https://arxiv.org/abs/2305.02897v1) (May 2023)
+- [Large Language Model Guided Tree-of-Thought](https://arxiv.org/abs/2305.08291v1) (May 2023)
+- [Active Retrieval Augmented Generation](https://arxiv.org/abs/2305.06983v1) (May 2023)
+- [A PhD Student's Perspective on Research in NLP in the Era of Very Large Language Models](https://arxiv.org/abs/2305.12544v1) (May 2023)
+- [Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings](https://arxiv.org/abs/2305.02317v1) (May 2023)
+- [Mirages: On Anthropomorphism in Dialogue Systems](https://arxiv.org/abs/2305.09800v1) (May 2023)
+- [Model evaluation for extreme risks](https://arxiv.org/abs/2305.15324v1) (May 2023)
+- [Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting](https://arxiv.org/abs/2305.04388v1) (May 2023)
+- [Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction](https://arxiv.org/abs/2305.02466v1) (May 2023)
+- [PromptClass: Weakly-Supervised Text Classification with Prompting Enhanced Noise-Robust Self-Training](https://arxiv.org/abs/2305.13723) (May 2023)
+- [Augmented Large Language Models with Parametric Knowledge Guiding](https://arxiv.org/abs/2305.04757v2) (May 2023)
+- [Aligning Large Language Models through Synthetic Feedback](https://arxiv.org/abs/2305.13735) (May 2023)
+- [Concept-aware Training Improves In-context Learning Ability of Language Models](https://arxiv.org/abs/2305.13775) (May 2023)
+- [FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance](https://arxiv.org/abs/2305.05176v1) (May 2023)
+- [Enhancing Black-Box Few-Shot Text Classification with Prompt-Based Data Augmentation](https://arxiv.org/abs/2305.13785) (May 2023)
+- [Detecting automatically the layout of clinical documents to enhance the performances of downstream natural language processing](https://arxiv.org/abs/2305.13817) (May 2023)
+- ["Is the Pope Catholic?" Applying Chain-of-Thought Reasoning to Understanding Conversational Implicatures](https://arxiv.org/abs/2305.13826) (May 2023)
+- [Let's Think Frame by Frame: Evaluating Video Chain of Thought with Video Infilling and Prediction](https://arxiv.org/abs/2305.13903) (May 2023)
+- [Generating Data for Symbolic Language with Large Language Models](https://arxiv.org/abs/2305.13917) (May 2023)
+- [Make a Choice! Knowledge Base Question Answering with In-Context Learning](https://arxiv.org/abs/2305.13972) (May 2023)
+- [Improving Language Models via Plug-and-Play Retrieval Feedback](https://arxiv.org/abs/2305.14002) (May 2023)
+- [Multi-Granularity Prompts for Topic Shift Detection in Dialogue](https://arxiv.org/abs/2305.14006) (May 2023)
+- [The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning](https://arxiv.org/abs/2305.14045) (May 2023)
+- [Can Language Models Understand Physical Concepts?](https://arxiv.org/abs/2305.14057) (May 2023)
+- [Evaluating Factual Consistency of Summaries with Large Language Models](https://arxiv.org/abs/2305.14069) (May 2023)
+- [Dr.ICL: Demonstration-Retrieved In-context Learning](https://arxiv.org/abs/2305.14128) (May 2023)
+- [Probing in Context: Toward Building Robust Classifiers via Probing Large Language Models](https://arxiv.org/abs/2305.14171) (May 2023)
+- [Skill-Based Few-Shot Selection for In-Context Learning](https://arxiv.org/abs/2305.14210) (May 2023)
+- [Exploring Chain-of-Thought Style Prompting for Text-to-SQL](https://arxiv.org/abs/2305.14215) (May 2023)
+- [Enhancing Chat Language Models by Scaling High-quality Instructional Conversations](https://arxiv.org/abs/2305.14233) (May 2023)
+- [On Learning to Summarize with Large Language Models as References](https://arxiv.org/abs/2305.14239) (May 2023)
+- [Learning to Generate Novel Scientific Directions with Contextualized Literature-based Discovery](https://arxiv.org/abs/2305.14259) (May 2023)
+- [Active Learning Principles for In-Context Learning with Large Language Models](https://arxiv.org/abs/2305.14264) (May 2023)
+- [Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs](https://arxiv.org/abs/2305.14279) (May 2023)
+- [Improving Factuality and Reasoning in Language Models through Multiagent Debate](https://arxiv.org/abs/2305.14325) (May 2023)
+- [ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models](https://arxiv.org/abs/2305.14323) (May 2023)
+- [WikiChat: A Few-Shot LLM-Based Chatbot Grounded with Wikipedia](https://arxiv.org/abs/2305.14292) (May 2023)
+- [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/abs/2305.14283) (May 2023)
+- [Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker](https://arxiv.org/abs/2305.13729) (May 2023)
+- [Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method](https://arxiv.org/abs/2305.13412) (May 2023)
+- [Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration](https://arxiv.org/abs/2305.13626) (May 2023)
+- [Prompt-Based Monte-Carlo Tree Search for Goal-Oriented Dialogue Policy Planning](https://arxiv.org/abs/2305.13660) (May 2023)
+- [Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment](https://arxiv.org/abs/2305.13669) (May 2023)
+- [Making Language Models Better Tool Learners with Execution Feedback](https://arxiv.org/abs/2305.13068) (May 2023)
+- [Text-to-SQL Error Correction with Language Models of Code](https://arxiv.org/abs/2305.13073) (May 2023)
+- [Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models](https://arxiv.org/abs/2305.13085) (May 2023)
+- [SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations](https://arxiv.org/abs/2305.13235) (May 2023)
+- ["According to ..." Prompting Language Models Improves Quoting from Pre-Training Data](https://arxiv.org/abs/2305.13252) (May 2023)
+- [Prompt-based methods may underestimate large language models' linguistic generalizations](https://arxiv.org/abs/2305.13264) (May 2023)
+- [Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases](https://arxiv.org/abs/2305.13269) (May 2023)
+- [Automated Few-shot Classification with Instruction-Finetuned Language Models](https://arxiv.org/abs/2305.12576) (May 2023)
+- [Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies](https://arxiv.org/abs/2305.12586) (May 2023)
+- [MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction](https://arxiv.org/abs/2305.12627) (May 2023)
+- [Learning Interpretable Style Embeddings via Prompting LLMs](https://arxiv.org/abs/2305.12696) (May 2023)
+- [Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting](https://arxiv.org/abs/2305.12723) (May 2023)
+- [A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches](https://arxiv.org/abs/2305.12749) (May 2023)
+- [This Prompt is Measuring \<MASK\>: Evaluating Bias Evaluation in Language Models](https://arxiv.org/abs/2305.12757) (May 2023)
+- [Enhancing Cross-lingual Natural Language Inference by Soft Prompting with Multilingual Verbalizer](https://arxiv.org/abs/2305.12761) (May 2023)
+- [Evaluating Prompt-based Question Answering for Object Prediction in the Open Research Knowledge Graph](https://arxiv.org/abs/2305.12900) (May 2023)
+- [Explaining How Transformers Use Context to Build Predictions](https://arxiv.org/abs/2305.12535) (May 2023)
+- [PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs](https://arxiv.org/abs/2305.12392) (May 2023)
+- [PromptNER: A Prompting Method for Few-shot Named Entity Recognition via k Nearest Neighbor Search](https://arxiv.org/abs/2305.12217) (May 2023)
+- [Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning](https://arxiv.org/abs/2305.12295) (May 2023)
+- [Enhancing Few-shot NER with Prompt Ordering based Data Augmentation](https://arxiv.org/abs/2305.11791) (May 2023)
+- [Chain-of-thought prompting for responding to in-depth dialogue questions with LLM](https://arxiv.org/abs/2305.11792) (May 2023)
+- [How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings](https://arxiv.org/abs/2305.11853) (May 2023)
+- [Evaluation of medium-large Language Models at zero-shot closed book generative question answering](https://arxiv.org/abs/2305.11991) (May 2023)
+- [Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer](https://arxiv.org/abs/2305.12077) (May 2023)
+- [Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions?](https://arxiv.org/abs/2305.12096) (May 2023)
+- [Reasoning Implicit Sentiment with Chain-of-Thought Prompting](https://arxiv.org/abs/2305.11255) (May 2023)
+- [Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs](https://arxiv.org/abs/2305.11334) (May 2023)
+- [AutoTrial: Prompting Language Models for Clinical Trial Design](https://arxiv.org/abs/2305.11366) (May 2023)
+- [CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing](https://arxiv.org/abs/2305.11738) (May 2023)
+- [Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning](https://arxiv.org/abs/2305.11759) (May 2023)
+- [Prompting with Pseudo-Code Instructions](https://arxiv.org/abs/2305.11790) (May 2023)
+- [TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models](https://arxiv.org/abs/2305.11171) (May 2023)
+- [Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors](https://arxiv.org/abs/2305.11159) (May 2023)
+- [Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model](https://arxiv.org/abs/2305.11140) (May 2023)
+- [Learning In-context Learning for Named Entity Recognition](https://arxiv.org/abs/2305.11038) (May 2023)
+- [Take a Break in the Middle: Investigating Subgoals towards Hierarchical Script Generation](https://arxiv.org/abs/2305.10907) (May 2023)
+- [TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse Relation Recognition](https://arxiv.org/abs/2305.10866) (May 2023)
+- [Large Language Models can be Guided to Evade AI-Generated Text Detection](https://arxiv.org/abs/2305.10847) (May 2023)
+- [Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning](https://arxiv.org/abs/2305.10613) (May 2023)
+- [Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization](https://arxiv.org/abs/2305.11095) (May 2023)
+- [Think Outside the Code: Brainstorming Boosts Large Language Models in Code Generation](https://arxiv.org/abs/2305.10679) (May 2023)
+- [Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback](https://arxiv.org/abs/2305.10142) (May 2023)
+- [ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing](https://arxiv.org/abs/2305.09770) (May 2023)
+- [StructGPT: A General Framework for Large Language Model to Reason over Structured Data](https://arxiv.org/abs/2305.09645) (May 2023)
+- [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) (May 2023)
+- [Large Language Models are Built-in Autoregressive Search Engines](https://arxiv.org/abs/2305.09612) (May 2023)
+- [MsPrompt: Multi-step Prompt Learning for Debiasing Few-shot Event Detection](https://arxiv.org/abs/2305.09335) (May 2023)
+- [Exploring the Impact of Layer Normalization for Zero-shot Neural Machine Translation](https://arxiv.org/abs/2305.09312) (May 2023)
+- [SGP-TOD: Building Task Bots Effortlessly via Schema-Guided LLM Prompting](https://arxiv.org/abs/2305.09067) (May 2023)
+- [Multi-modal Visual Understanding with Prompts for Semantic Information Disentanglement of Image](https://arxiv.org/abs/2305.09333) (May 2023)
+- [Soft Prompt Decoding for Multilingual Dense Retrieval](https://arxiv.org/abs/2305.09025) (May 2023)
+- [PaLM 2 Technical Report](https://ai.google/static/documents/palm2techreport.pdf) (May 2023)
+- [Are LLMs All You Need for Task-Oriented Dialogue?](https://arxiv.org/abs/2304.06556) (April 2023)
+- [HiPrompt: Few-Shot Biomedical Knowledge Fusion via Hierarchy-Oriented Prompting](https://arxiv.org/abs/2304.05973) (April 2023)
+- [Approximating Human Evaluation of Social Chatbots with Prompting](https://arxiv.org/abs/2304.05253) (April 2023)
+- [Automated Reading Passage Generation with OpenAI's Large Language Model](https://arxiv.org/abs/2304.04616) (April 2023)
+- [WebBrain: Learning to Generate Factually Correct Articles for Queries by Grounding on Large Web Corpus](https://arxiv.org/abs/2304.04358) (April 2023)
+- [Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition](https://arxiv.org/abs/2304.04704) (April 2023)
+- [GPT detectors are biased against non-native English writers](https://arxiv.org/abs/2304.02819) (April 2023)
+- [Zero-Shot Next-Item Recommendation using Large Pretrained Language Models](https://arxiv.org/abs/2304.03153) (April 2023)
+- [Large Language Models as Master Key: Unlocking the Secrets of Materials Science with GPT](https://arxiv.org/abs/2304.02213) (April 2023)
+- [Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning](https://arxiv.org/abs/2304.01295) (April 2023)
+- [Better Language Models of Code through Self-Improvement](https://arxiv.org/abs/2304.01228) (April 2023)
+- [PromptORE -- A Novel Approach Towards Fully Unsupervised Relation Extraction](https://arxiv.org/abs/2304.01209) (April 2023)
+- [Assessing Language Model Deployment with Risk Cards](https://arxiv.org/abs/2303.18190) (April 2023)
+- [Enhancing Large Language Models with Climate Resources](https://arxiv.org/abs/2304.00116) (March 2023)
+- [BloombergGPT: A Large Language Model for Finance](https://arxiv.org/abs/2303.17564) (March 2023)
+- [Medical Intervention Duration Estimation Using Language-enhanced Transformer Encoder with Medical Prompts](https://arxiv.org/abs/2303.17408) (March 2023)
+- [Soft-prompt tuning to predict lung cancer using primary care free-text Dutch medical notes](https://arxiv.org/abs/2303.15846) (March 2023)
+- [TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs](https://arxiv.org/abs/2303.16434) (March 2023)
+- [Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning](https://arxiv.org/abs/2303.16445) (March 2023)
+- [Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses](https://arxiv.org/abs/2303.15587) (March 2023)
+- [Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning](https://arxiv.org/abs/2303.14375) (March 2023)
+- [Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation](https://arxiv.org/abs/2303.15413) (March 2023)
+- [Zero-shot Model Diagnosis](https://arxiv.org/abs/2303.15441) (March 2023)
+- [Prompting Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages](https://arxiv.org/abs/2303.13592) (March 2023)
+- [SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization](https://arxiv.org/abs/2303.13035) (March 2023)
+- [Large Language Models and Simple, Stupid Bugs](https://arxiv.org/abs/2303.11455) (March 2023)
+- [Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses?](https://arxiv.org/abs/2303.09325) (March 2023)
+- [SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models](https://arxiv.org/abs/2303.08896) (March 2023)
+- [Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification](https://arxiv.org/abs/2303.07142) (March 2023)
+- [ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction](https://arxiv.org/abs/2303.05063) (March 2023)
+- [MathPrompter: Mathematical Reasoning using Large Language Models](https://arxiv.org/abs/2303.05398) (March 2023)
+- [Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums](https://arxiv.org/abs/2303.05400) (March 2023)
+- [Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting](https://arxiv.org/abs/2303.03199) (March 2023)
+- [Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering](https://arxiv.org/abs/2303.01903) (March 2023)
+- [Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis](https://arxiv.org/abs/2303.00815) (March 2023)
+- [SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks](https://arxiv.org/abs/2303.00733) (March 2023)
+- [Goal Driven Discovery of Distributional Differences via Language Descriptions](https://arxiv.org/abs/2302.14233) (February 2023)
+- [Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models](https://arxiv.org/abs/2302.13439) (February 2023)
+- [TabGenie: A Toolkit for Table-to-Text Generation](https://arxiv.org/abs/2302.14169) (February 2023)
+- [SGL-PT: A Strong Graph Learner with Graph Prompt Tuning](https://arxiv.org/abs/2302.12449) (February 2023)
+- [Few-Shot Table-to-Text Generation with Prompt-based Adapter](https://arxiv.org/abs/2302.12468) (February 2023)
+- [Language Models Are Few-shot Learners for Prognostic Prediction](https://arxiv.org/abs/2302.12692) (February 2023)
+- [STA: Self-controlled Text Augmentation for Improving Text Classifications](https://arxiv.org/abs/2302.12784) (February 2023)
+- [Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback](https://arxiv.org/abs/2302.12813) (February 2023)
+- [How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study](https://arxiv.org/abs/2302.10916) (February 2023)
+- [Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales](https://arxiv.org/abs/2302.08961) (February 2023)
+- [LabelPrompt: Effective Prompt-based Learning for Relation Classification](https://arxiv.org/abs/2302.08068) (February 2023)
+- [Language Model Crossover: Variation through Few-Shot Prompting](https://arxiv.org/abs/2302.09236) (February 2023)
+- [Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition](https://arxiv.org/abs/2302.08102) (February 2023)
+- [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (February 2023)
+- [Prompting for Multimodal Hateful Meme Classification](https://arxiv.org/abs/2302.04156) (February 2023)
+- [PLACES: Prompting Language Models for Social Conversation Synthesis](https://arxiv.org/abs/2302.03269) (February 2023)
+- [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/abs/2302.04761) (February 2023)
+- [Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation](https://arxiv.org/abs/2302.01441) (February 2023)
+- [Crawling the Internal Knowledge-Base of Language Models](https://arxiv.org/abs/2301.12810) (January 2023)
+- [Legal Prompt Engineering for Multilingual Legal Judgement Prediction](https://arxiv.org/abs/2212.02199) (December 2022)
+- [Investigating Prompt Engineering in Diffusion Models](https://arxiv.org/abs/2211.15462) (November 2022)
+- [Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language](https://arxiv.org/abs/2210.15157) (October 2022)
+- [Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic?](https://arxiv.org/abs/2210.14699) (October 2022)
+- [Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](https://arxiv.org/abs/2209.09513v2) (September 2022)
+- [Plot Writing From Pre-Trained Language Models](https://aclanthology.org/2022.inlg-main.5) (July 2022)
+- [Survey of Hallucination in Natural Language Generation](https://arxiv.org/abs/2202.03629) (February 2022)
## Collections
- - [Chain-of-Thought Papers](https://github.com/Timothyxxx/Chain-of-ThoughtsPapers)
- - [Papers with Code](https://paperswithcode.com/task/prompt-engineering)
- - [Prompt Papers](https://github.com/thunlp/PromptPapers#papers)
+- [Chain-of-Thought Papers](https://github.com/Timothyxxx/Chain-of-ThoughtsPapers)
+- [Papers with Code](https://paperswithcode.com/task/prompt-engineering)
+- [Prompt Papers](https://github.com/thunlp/PromptPapers#papers)
diff --git a/pages/tools.en.mdx b/pages/tools.en.mdx
index 9a8a3db..df20aa9 100644
--- a/pages/tools.en.mdx
+++ b/pages/tools.en.mdx
@@ -35,6 +35,7 @@
- [OpenPrompt](https://github.com/thunlp/OpenPrompt)
- [OpenPlayground](https://nat.dev/)
- [OptimusPrompt](https://www.optimusprompt.ai)
+- [Outlines](https://github.com/normal-computing/outlines)
- [Playground](https://playgroundai.com)
- [Portkey AI](https://portkey.ai/)
- [Prodia](https://app.prodia.com/#/)