Mirror of https://github.com/dair-ai/Prompt-Engineering-Guide (synced 2024-11-19 21:25:35 +00:00)
Merge pull request #291 from dair-ai/ritvik-sep17
Updated papers.en.mdx
Commit 61e204da64
@@ -22,6 +22,10 @@ The following are the latest papers (sorted by release date) on prompt engineering
## Approaches
- [Chain-of-Verification Reduces Hallucination in Large Language Models](https://arxiv.org/abs/2309.11495) (September 2023)
- [Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers](https://arxiv.org/abs/2309.08532) (September 2023)
- [From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting](https://arxiv.org/abs/2309.04269) (September 2023)
- [Re-Reading Improves Reasoning in Language Models](https://arxiv.org/abs/2309.06275) (September 2023)
- [Graph of Thoughts: Solving Elaborate Problems with Large Language Models](https://arxiv.org/abs/2308.09687v2) (August 2023)
- [Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding](https://arxiv.org/abs/2307.15337) (July 2023)
- [Focused Prefix Tuning for Controllable Text Generation](https://arxiv.org/abs/2306.00369) (June 2023)
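
Several of the approaches above are easy to prototype directly. For the Chain-of-Verification entry, here is a minimal sketch of the draft, plan-verifications, verify-independently, revise loop. It assumes the OpenAI Python client (`openai>=1.0`); the `llm()` helper, the `gpt-4` model name, and the prompt wording are illustrative placeholders, not the paper's exact setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm(prompt: str) -> str:
    # Minimal single-turn helper; the model name is an assumption.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = llm(f"Answer concisely:\n{question}")

    # 2. Plan verification questions that probe the factual claims in the draft.
    checks = llm(
        "Write 3 short questions that would verify the factual claims in this answer.\n"
        f"Question: {question}\nDraft answer: {baseline}"
    )

    # 3. Answer the verification questions independently of the draft.
    answers = llm(f"Answer each question independently and concisely:\n{checks}")

    # 4. Produce a final answer consistent with the verification results.
    return llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        f"Verification Q&A:\n{checks}\n{answers}\n"
        "Rewrite the draft answer, correcting anything the verification contradicts."
    )
```

The point of step 3 is that the verification questions are answered without seeing the draft, so factual errors in the baseline are less likely to be repeated in the final answer.
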
@@ -169,6 +173,33 @@ The following are the latest papers (sorted by release date) on prompt engineering
## Applications
- [Graph Neural Prompting with Large Language Models](https://arxiv.org/abs/2309.15427) (September 2023)
- [Large Language Model Alignment: A Survey](https://arxiv.org/abs/2309.15025) (September 2023)
- [Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic](https://arxiv.org/abs/2309.13339) (September 2023)
- [A Practical Survey on Zero-shot Prompt Design for In-context Learning](https://arxiv.org/abs/2309.13205) (September 2023)
- [EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning](https://arxiv.org/abs/2309.10687) (September 2023)
- [Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning](https://arxiv.org/abs/2309.10359) (September 2023)
- [PolicyGPT: Automated Analysis of Privacy Policies with Large Language Models](https://arxiv.org/abs/2309.10238) (September 2023)
- [LLM4Jobs: Unsupervised occupation extraction and standardization leveraging Large Language Models](https://arxiv.org/abs/2309.09708) (September 2023)
- [Summarization is (Almost) Dead](https://arxiv.org/abs/2309.09558) (September 2023)
- [Investigating Zero- and Few-shot Generalization in Fact Verification](https://arxiv.org/abs/2309.09444) (September 2023)
- [Performance of the Pre-Trained Large Language Model GPT-4 on Automated Short Answer Grading](https://arxiv.org/abs/2309.09338) (September 2023)
- [Contrastive Decoding Improves Reasoning in Large Language Models](https://arxiv.org/abs/2309.09117) (September 2023)
- [Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data?](https://arxiv.org/abs/2309.08963) (September 2023)
- [Neural Machine Translation Models Can Learn to be Few-shot Learners](https://arxiv.org/abs/2309.08590) (September 2023)
- [Chain-of-Thought Reasoning is a Policy Improvement Operator](https://arxiv.org/abs/2309.08589) (September 2023)
- [ICLEF: In-Context Learning with Expert Feedback for Explainable Style Transfer](https://arxiv.org/abs/2309.08583) (September 2023)
- [When do Generative Query and Document Expansions Fail? A Comprehensive Study Across Methods, Retrievers, and Datasets](https://arxiv.org/abs/2309.08541) (September 2023)
- [Using Large Language Models for Knowledge Engineering (LLMKE): A Case Study on Wikidata](https://arxiv.org/abs/2309.08491) (September 2023)
- [Self-Consistent Narrative Prompts on Abductive Natural Language Inference](https://arxiv.org/abs/2309.08303) (September 2023)
- [Investigating Answerability of LLMs for Long-Form Question Answering](https://arxiv.org/abs/2309.08210) (September 2023)
- [PromptTTS++: Controlling Speaker Identity in Prompt-Based Text-to-Speech Using Natural Language Descriptions](https://arxiv.org/abs/2309.08140) (September 2023)
- [An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing](https://arxiv.org/abs/2309.08008) (September 2023)
- [Leveraging Contextual Information for Effective Entity Salience Detection](https://arxiv.org/abs/2309.07990) (September 2023)
- [Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts](https://arxiv.org/abs/2309.06135) (September 2023)
- [PACE: Prompting and Augmentation for Calibrated Confidence Estimation with GPT-4 in Cloud Incident Root Cause Analysis](https://arxiv.org/abs/2309.05833) (September 2023)
- [From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting](https://arxiv.org/abs/2309.04269) (September 2023)
- [Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models](https://arxiv.org/abs/2309.04461) (September 2023)
- [Zero-Resource Hallucination Prevention for Large Language Models](https://arxiv.org/abs/2309.02654) (September 2023)
- [Certifying LLM Safety against Adversarial Prompting](https://arxiv.org/abs/2309.02705) (September 2023)
- [Improving Code Generation by Dynamic Temperature Sampling](https://arxiv.org/abs/2309.02772) (September 2023)
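
The Chain of Density entry, which appears under both Approaches and Applications, can be sketched the same way: the summary is rewritten over several rounds, folding in missing salient entities while keeping the length roughly constant. As above, the client setup, model name, round count, and prompt wording are illustrative assumptions rather than the paper's exact protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm(prompt: str) -> str:
    # Same minimal single-turn helper as in the sketch above.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def chain_of_density(article: str, rounds: int = 3) -> str:
    # Start from a plain, fairly sparse summary.
    summary = llm(f"Summarize the following article in about 80 words:\n{article}")

    # Each round: find missing salient entities, then rewrite the summary to
    # include them without making it longer (denser, not bigger).
    for _ in range(rounds):
        summary = llm(
            f"Article:\n{article}\n\nCurrent summary:\n{summary}\n\n"
            "List 1-3 informative entities from the article that are missing from "
            "the summary, then rewrite the summary to include them while keeping "
            "roughly the same length."
        )
    return summary
```
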