# Papers
The following are the latest papers (sorted by release date) on prompt engineering. We update this list daily as new papers come in, and we incorporate summaries of these papers into the guides above every week.
## Overviews
- [Natural Language Reasoning, A Survey](https://arxiv.org/abs/2303.14725) (March 2023)
- [Augmented Language Models: a Survey](https://arxiv.org/abs/2302.07842) (February 2023)
- [A Survey for In-context Learning](https://arxiv.org/abs/2301.00234) (December 2022)
- [Towards Reasoning in Large Language Models: A Survey](https://arxiv.org/abs/2212.10403) (December 2022)
- [Reasoning with Language Model Prompting: A Survey](https://arxiv.org/abs/2212.09597) (December 2022)
- [Emergent Abilities of Large Language Models](https://arxiv.org/abs/2206.07682) (June 2022)
- [A Taxonomy of Prompt Modifiers for Text-To-Image Generation](https://arxiv.org/abs/2204.13988) (April 2022)
- [Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing](https://arxiv.org/abs/2107.13586) (July 2021)
## Approaches
- [Self-Refine: Iterative Refinement with Self-Feedback](https://arxiv.org/abs/2303.17651v1) (March 2023)
- [kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference](https://arxiv.org/abs/2303.13824) (March 2023)
- [Visual-Language Prompt Tuning with Knowledge-guided Context Optimization](https://arxiv.org/abs/2303.13283) (March 2023)
- [Fairness-guided Few-shot Prompting for Large Language Models](https://arxiv.org/abs/2303.13217) (March 2023)
- [Context-faithful Prompting for Large Language Models](https://arxiv.org/abs/2303.11315) (March 2023)
- [Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning](https://arxiv.org/abs/2303.10475) (March 2023)
- [UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation](https://arxiv.org/abs/2303.08518) (March 2023)
- [Model-tuning Via Prompts Makes NLP Models Adversarially Robust](https://arxiv.org/abs/2303.07320) (March 2023)
- [Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer](https://arxiv.org/abs/2303.03922) (March 2023)
- [CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification](https://arxiv.org/abs/2303.03628) (March 2023)
- [Larger language models do in-context learning differently](https://arxiv.org/abs/2303.03846) (March 2023)
- [OpenICL: An Open-Source Framework for In-context Learning](https://arxiv.org/abs/2303.02913) (March 2023)
- [Dynamic Prompting: A Unified Framework for Prompt Tuning](https://arxiv.org/abs/2303.02909) (March 2023)
- [Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning](https://arxiv.org/abs/2303.02861) (March 2023)
- [Effectiveness of Data Augmentation for Prefix Tuning with Limited Data](https://arxiv.org/abs/2303.02577) (March 2023)
- [Mixture of Soft Prompts for Controllable Data Generation](https://arxiv.org/abs/2303.01580) (March 2023)
- [Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners](https://arxiv.org/abs/2303.02151) (March 2023)
- [How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks](https://arxiv.org/abs/2303.00293) (March 2023)
- [Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT](https://arxiv.org/pdf/2302.10198.pdf) (February 2023)
- [EvoPrompting: Language Models for Code-Level Neural Architecture Search](https://arxiv.org/abs/2302.14838) (February 2023)
- [In-Context Instruction Learning](https://arxiv.org/abs/2302.14691) (February 2023)
- [Chain of Hindsight Aligns Language Models with Feedback](https://arxiv.org/abs/2302.02676) (February 2023)
- [Language Is Not All You Need: Aligning Perception with Language Models](https://arxiv.org/abs/2302.14045) (February 2023)
- [Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data](https://arxiv.org/abs/2302.12822) (February 2023)
- [Active Prompting with Chain-of-Thought for Large Language Models](https://arxiv.org/abs/2302.12246) (February 2023)
- [More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models](https://arxiv.org/abs/2302.12173) (February 2023)
- [A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT](https://arxiv.org/abs/2302.11382) (February 2023)
- [Guiding Large Language Models via Directional Stimulus Prompting](https://arxiv.org/abs/2302.11520) (February 2023)
- [How Does In-Context Learning Help Prompt Tuning?](https://arxiv.org/abs/2302.11521) (February 2023)
- [Scalable Prompt Generation for Semi-supervised Learning with Language Models](https://arxiv.org/abs/2302.09236) (February 2023)
- [Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints](https://arxiv.org/abs/2302.09185) (February 2023)
- [À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting](https://arxiv.org/abs/2302.07994) (February 2023)
- [GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks](https://arxiv.org/abs/2302.08043) (February 2023)
- [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (February 2023)
- [SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains](https://arxiv.org/abs/2302.06868) (February 2023)
- [Evaluating the Robustness of Discrete Prompts](https://arxiv.org/abs/2302.05619) (February 2023)
- [Compositional Exemplars for In-context Learning](https://arxiv.org/abs/2302.05698) (February 2023)
- [Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery](https://arxiv.org/abs/2302.03668) (February 2023)
- [Multimodal Chain-of-Thought Reasoning in Language Models](https://arxiv.org/abs/2302.00923) (February 2023)
- [Large Language Models Can Be Easily Distracted by Irrelevant Context](https://arxiv.org/abs/2302.00093) (February 2023)
- [Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models](https://arxiv.org/abs/2302.00618) (February 2023)
- [Progressive Prompts: Continual Learning for Language Models](https://arxiv.org/abs/2301.12314) (January 2023)
- [Batch Prompting: Efficient Inference with LLM APIs](https://arxiv.org/abs/2301.08721) (January 2023)
- [Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP](https://arxiv.org/abs/2212.14024) (December 2022)
- [On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning](https://arxiv.org/abs/2212.08061) (December 2022)
- [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) (December 2022)
- [Successive Prompting for Decomposing Complex Questions](https://arxiv.org/abs/2212.04092) (December 2022)
- [Large Language Models are reasoners with Self-Verification](https://arxiv.org/abs/2212.09561v1) (December 2022)
- [Discovering Language Model Behaviors with Model-Written Evaluations](https://arxiv.org/abs/2212.09251) (December 2022)
- [Structured Prompting: Scaling In-Context Learning to 1,000 Examples](https://arxiv.org/abs/2212.06713) (December 2022)
- [PAL: Program-aided Language Models](https://arxiv.org/abs/2211.10435) (November 2022)
- [Large Language Models Are Human-Level Prompt Engineers](https://arxiv.org/abs/2211.01910) (November 2022)
- [Ignore Previous Prompt: Attack Techniques For Language Models](https://arxiv.org/abs/2211.09527) (November 2022)
- [Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods](https://arxiv.org/abs/2210.07321) (October 2022)
- [Teaching Algorithmic Reasoning via In-context Learning](https://arxiv.org/abs/2211.09066) (November 2022)
- [Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference](https://arxiv.org/abs/2211.11875) (November 2022)
- [Ask Me Anything: A simple strategy for prompting language models](https://paperswithcode.com/paper/ask-me-anything-a-simple-strategy-for) (October 2022)
- [Recitation-Augmented Language Models](https://arxiv.org/abs/2210.01296) (October 2022)
- [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) (October 2022)
- [Prompting GPT-3 To Be Reliable](https://arxiv.org/abs/2210.09150) (October 2022)
- [Decomposed Prompting: A Modular Approach for Solving Complex Tasks](https://arxiv.org/abs/2210.02406) (October 2022)
- [Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought](https://arxiv.org/abs/2210.01240v3) (October 2022)
- [Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples](https://arxiv.org/abs/2209.02128) (September 2022)
- [Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning](https://arxiv.org/abs/2209.14610) (September 2022)
- [Promptagator: Few-shot Dense Retrieval From 8 Examples](https://arxiv.org/abs/2209.11755) (September 2022)
- [Atlas: Few-shot Learning with Retrieval Augmented Language Models](https://arxiv.org/abs/2208.03299) (November 2022)
- [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/abs/2207.05987) (July 2022)
- [On the Advance of Making Language Models Better Reasoners](https://arxiv.org/abs/2206.02336) (June 2022)
- [Large Language Models are Zero-Shot Reasoners](https://arxiv.org/abs/2205.11916) (May 2022)
- [Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations](https://arxiv.org/abs/2205.11822) (May 2022)
- [MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning](https://arxiv.org/abs/2205.00445) (May 2022)
- [PPT: Pre-trained Prompt Tuning for Few-shot Learning](https://aclanthology.org/2022.acl-long.576/) (May 2022)
- [Toxicity Detection with Generative Prompt-based Inference](https://arxiv.org/abs/2205.12390) (May 2022)
- [Learning to Transfer Prompts for Text Generation](https://arxiv.org/abs/2205.01543) (May 2022)
- [The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning](https://arxiv.org/abs/2205.03401) (May 2022)
- [A Taxonomy of Prompt Modifiers for Text-To-Image Generation](https://arxiv.org/abs/2204.13988) (April 2022)
- [PromptChainer: Chaining Large Language Model Prompts through Visual Programming](https://arxiv.org/abs/2203.06566) (March 2022)
- [Self-Consistency Improves Chain of Thought Reasoning in Language Models](https://arxiv.org/abs/2203.11171) (March 2022)
- [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155) (March 2022)
- [Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?](https://arxiv.org/abs/2202.12837) (February 2022)
- [Chain of Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) (January 2022)
- [Show Your Work: Scratchpads for Intermediate Computation with Language Models](https://arxiv.org/abs/2112.00114) (November 2021)
- [AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts](https://arxiv.org/abs/2110.01691) (October 2021)
- [Generated Knowledge Prompting for Commonsense Reasoning](https://arxiv.org/abs/2110.08387) (October 2021)
- [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207) (October 2021)
- [Reframing Instructional Prompts to GPTk's Language](https://arxiv.org/abs/2109.07830) (September 2021)
- [Design Guidelines for Prompt Engineering Text-to-Image Generative Models](https://arxiv.org/abs/2109.06977) (September 2021)
- [Making Pre-trained Language Models Better Few-shot Learners](https://aclanthology.org/2021.acl-long.295) (August 2021)
- [Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity](https://arxiv.org/abs/2104.08786) (April 2021)
- [BERTese: Learning to Speak to BERT](https://aclanthology.org/2021.eacl-main.316) (April 2021)
- [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) (April 2021)
- [Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/abs/2102.07350) (February 2021)
- [Calibrate Before Use: Improving Few-Shot Performance of Language Models](https://arxiv.org/abs/2102.09690) (February 2021)
- [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://arxiv.org/abs/2101.00190) (January 2021)
- [Learning to Generate Task-Specific Adapters from Task Description](https://arxiv.org/abs/2101.00420) (January 2021)
- [Making Pre-trained Language Models Better Few-shot Learners](https://arxiv.org/abs/2012.15723) (December 2020)
- [Learning from Task Descriptions](https://aclanthology.org/2020.emnlp-main.105/) (November 2020)
- [AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts](https://arxiv.org/abs/2010.15980) (October 2020)
- [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165) (May 2020)
- [How Can We Know What Language Models Know?](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00324/96460/How-Can-We-Know-What-Language-Models-Know) (July 2020)
- [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) (January 2020)
## Applications
- [PaLM 2 Technical Report](https://ai.google/static/documents/palm2techreport.pdf) (May 2023)
- [BloombergGPT: A Large Language Model for Finance](https://arxiv.org/abs/2303.17564) (March 2023)
- [Medical Intervention Duration Estimation Using Language-enhanced Transformer Encoder with Medical Prompts](https://arxiv.org/abs/2303.17408) (March 2023)
- [Soft-prompt tuning to predict lung cancer using primary care free-text Dutch medical notes](https://arxiv.org/abs/2303.15846) (March 2023)
- [TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs](https://arxiv.org/abs/2303.16434) (March 2023)
- [Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning](https://arxiv.org/abs/2303.16445) (March 2023)
- [Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses](https://arxiv.org/abs/2303.15587) (March 2023)
- [Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning](https://arxiv.org/abs/2303.14375) (March 2023)
- [Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation](https://arxiv.org/abs/2303.15413) (March 2023)
- [Zero-shot Model Diagnosis](https://arxiv.org/abs/2303.15441) (March 2023)
- [Prompting Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages](https://arxiv.org/abs/2303.13592) (March 2023)
- [SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization](https://arxiv.org/abs/2303.13035) (March 2023)
- [Large Language Models and Simple, Stupid Bugs](https://arxiv.org/abs/2303.11455) (March 2023)
- [Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses?](https://arxiv.org/abs/2303.09325) (March 2023)
- [SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models](https://arxiv.org/abs/2303.08896) (March 2023)
- [Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification](https://arxiv.org/abs/2303.07142) (March 2023)
- [ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction](https://arxiv.org/abs/2303.05063) (March 2023)
- [MathPrompter: Mathematical Reasoning using Large Language Models](https://arxiv.org/abs/2303.05398) (March 2023)
- [Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums](https://arxiv.org/abs/2303.05400) (March 2023)
- [Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting](https://arxiv.org/abs/2303.03199) (March 2023)
- [Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering](https://arxiv.org/abs/2303.01903) (March 2023)
- [Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis](https://arxiv.org/abs/2303.00815) (March 2023)
- [SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks](https://arxiv.org/abs/2303.00733) (March 2023)
- [Goal Driven Discovery of Distributional Differences via Language Descriptions](https://arxiv.org/abs/2302.14233) (February 2023)
- [Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models](https://arxiv.org/abs/2302.13439) (February 2023)
- [TabGenie: A Toolkit for Table-to-Text Generation](https://arxiv.org/abs/2302.14169) (February 2023)
- [SGL-PT: A Strong Graph Learner with Graph Prompt Tuning](https://arxiv.org/abs/2302.12449) (February 2023)
- [Few-Shot Table-to-Text Generation with Prompt-based Adapter](https://arxiv.org/abs/2302.12468) (February 2023)
- [Language Models Are Few-shot Learners for Prognostic Prediction](https://arxiv.org/abs/2302.12692) (February 2023)
- [STA: Self-controlled Text Augmentation for Improving Text Classifications](https://arxiv.org/abs/2302.12784) (February 2023)
- [Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback](https://arxiv.org/abs/2302.12813) (February 2023)
- [How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study](https://arxiv.org/abs/2302.10916) (February 2023)
- [Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales](https://arxiv.org/abs/2302.08961) (February 2023)
- [LabelPrompt: Effective Prompt-based Learning for Relation Classification](https://arxiv.org/abs/2302.08068) (February 2023)
- [Language Model Crossover: Variation through Few-Shot Prompting](https://arxiv.org/abs/2302.12170) (February 2023)
- [Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition](https://arxiv.org/abs/2302.08102) (February 2023)
- [The Capacity for Moral Self-Correction in Large Language Models](https://arxiv.org/abs/2302.07459) (February 2023)
- [Prompting for Multimodal Hateful Meme Classification](https://arxiv.org/abs/2302.04156) (February 2023)
- [PLACES: Prompting Language Models for Social Conversation Synthesis](https://arxiv.org/abs/2302.03269) (February 2023)
- [Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation](https://arxiv.org/abs/2302.01441) (February 2023)
- [Crawling the Internal Knowledge-Base of Language Models](https://arxiv.org/abs/2301.12810) (January 2023)
- [Legal Prompt Engineering for Multilingual Legal Judgement Prediction](https://arxiv.org/abs/2212.02199) (December 2022)
- [Investigating Prompt Engineering in Diffusion Models](https://arxiv.org/abs/2211.15462) (November 2022)
- [Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering](https://arxiv.org/abs/2209.09513v2) (September 2022)
- [Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language](https://arxiv.org/abs/2210.15157) (October 2022)
- [Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic?](https://arxiv.org/abs/2210.14699) (October 2022)
- [Plot Writing From Pre-Trained Language Models](https://aclanthology.org/2022.inlg-main.5) (July 2022)
- [Survey of Hallucination in Natural Language Generation](https://arxiv.org/abs/2202.03629) (February 2022)
## Collections
- [Chain-of-Thought Papers](https://github.com/Timothyxxx/Chain-of-ThoughtsPapers)
- [Papers with Code](https://paperswithcode.com/task/prompt-engineering)
- [Prompt Papers](https://github.com/thunlp/PromptPapers#papers)