Mirror of https://github.com/dair-ai/Prompt-Engineering-Guide, synced 2024-11-04 12:00:10 +00:00
Updated papers.en.mdx
This commit is contained in: parent 06fb730486, commit 94c6734ec8
@@ -22,6 +22,7 @@ The following are the latest papers (sorted by release date) on prompt engineering
## Approaches
- [Re-Reading Improves Reasoning in Language Models](https://arxiv.org/abs/2309.06275) (September 2023)
- [Graph of Thoughts: Solving Elaborate Problems with Large Language Models](https://arxiv.org/abs/2308.09687v2) (August 2023)
- [Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding](https://arxiv.org/abs/2307.15337) (July 2023)
- [Focused Prefix Tuning for Controllable Text Generation](https://arxiv.org/abs/2306.00369) (June 2023)
@@ -169,6 +170,10 @@ The following are the latest papers (sorted by release date) on prompt engineering
## Applications
- [Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts](https://arxiv.org/abs/2309.06135) (September 2023)
- [PACE: Prompting and Augmentation for Calibrated Confidence Estimation with GPT-4 in Cloud Incident Root Cause Analysis](https://arxiv.org/abs/2309.05833) (September 2023)
- [From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting](https://arxiv.org/abs/2309.04269) (September 2023)
- [Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models](https://arxiv.org/abs/2309.04461) (September 2023)
- [Zero-Resource Hallucination Prevention for Large Language Models](https://arxiv.org/abs/2309.02654) (September 2023)
- [Certifying LLM Safety against Adversarial Prompting](https://arxiv.org/abs/2309.02772) (September 2023)
- [Improving Code Generation by Dynamic Temperature Sampling](https://arxiv.org/abs/2309.02772) (September 2023)