added guides
commit 9caebe46c0 (parent fc0b16d707)

@@ -2,6 +2,7 @@
   "index": "Prompt Engineering",
   "introduction": "Introduction",
   "techniques": "Techniques",
+  "guides": "Guides",
   "applications": "Applications",
   "prompts": "Prompt Hub",
   "models": "Models",

pages/guides/_meta.en.json (new file, 3 lines)

{
  "optimizing-prompts": "Optimizing Prompts"
}

pages/guides/optimizing-prompts.en.mdx (new file, 39 lines)

## Crafting Effective Prompts for LLMs

<iframe width="100%"
  height="415px"
  src="https://www.youtube.com/embed/8KNKjBBm1Kw?si=puEJrGFe9XSu8O-A"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowFullScreen
/>

Large Language Models (LLMs) offer immense power for a wide range of tasks, but their effectiveness hinges on the quality of the prompts they receive. This post summarizes the key aspects of designing effective prompts to maximize LLM performance.

### Key Considerations for Prompt Design

**Specificity and Clarity:**
Just like giving instructions to a human, prompts should clearly articulate the desired outcome. Ambiguity can lead to unexpected or irrelevant outputs.
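
As a minimal sketch (the example prompts below are illustrative additions, not from the original post), compare a vague request with one that pins down the task, audience, length, and focus:

```python
# Vague: the model must guess the audience, length, and focus.
vague = "Tell me about electric cars."

# Specific: task, audience, length, and focus are all stated explicitly.
specific = (
    "Write a 3-sentence summary of the main battery-life trade-offs of electric cars "
    "for a reader deciding between an EV and a hybrid. Use plain, non-technical language."
)

print(specific)
```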

**Structured Inputs and Outputs:**
Structuring inputs using formats like JSON or XML can significantly enhance an LLM's ability to understand and process information. Similarly, specifying the desired output format (e.g., a list, paragraph, or code snippet) improves response relevance.
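
For instance, here is a sketch of a prompt that passes a structured JSON record in and asks for JSON back; the field names and product data are invented for illustration:

```python
import json

# Structured input: the model sees unambiguous fields instead of free-form text.
record = {
    "name": "Acme Kettle",
    "price_usd": 49.99,
    "reviews": ["heats fast", "lid feels flimsy"],
}

prompt = (
    "You are given a product record as JSON:\n"
    f"{json.dumps(record, indent=2)}\n\n"
    "Return a JSON object with exactly two keys:\n"
    '  "summary": one sentence summarizing the reviews,\n'
    '  "sentiment": "positive", "negative", or "mixed".\n'
    "Return only valid JSON, with no extra text."
)

print(prompt)
```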

**Delimiters for Enhanced Structure:**
Utilizing special characters as delimiters within prompts can further clarify the structure and segregate different elements, improving the model's understanding.
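
One common pattern, sketched below with placeholder content, is to fence user-provided text inside XML-style tags (or triple backticks) so that instructions and data cannot be confused:

```python
article = "Electric kettles outsold stovetop kettles for the first time last year, ..."  # placeholder text

# "### Instruction ###" and the <article> tags act as delimiters that separate
# the instructions from the data to be processed.
prompt = f"""### Instruction ###
Summarize the article inside the <article> tags in two bullet points.

<article>
{article}
</article>"""

print(prompt)
```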

**Task Decomposition for Complex Operations:**
Instead of presenting LLMs with a monolithic prompt encompassing multiple tasks, breaking down complex processes into simpler subtasks significantly improves clarity and performance. This allows the model to focus on each subtask individually, ultimately leading to a more accurate overall outcome.
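
A sketch of this idea, assuming each subtask is sent as its own prompt and the intermediate result is fed forward; `complete` is a stand-in for whatever LLM client you use:

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to your LLM provider; here it just echoes."""
    return f"<model output for: {prompt[:40]}...>"

document = "..."  # the text you want to process

# Instead of one monolithic "extract, group, and summarize" prompt,
# each step gets its own focused prompt, and the output of one step
# becomes the input of the next.
products = complete(f"List every product name mentioned in the text below:\n\n{document}")
groups = complete(f"Group these product names by category:\n\n{products}")
summary = complete(f"Write a one-paragraph summary of this grouping:\n\n{groups}")

print(summary)
```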

### Advanced Prompting Strategies

**Few-Shot Prompting:**
Providing the LLM with a few examples of desired input-output pairs guides it towards generating higher-quality responses by demonstrating the expected pattern. Learn more about few-shot prompting [here](https://www.promptingguide.ai/techniques/fewshot).
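
As a rough sketch (the labels and examples are invented), a few-shot sentiment prompt simply inlines a handful of input-output pairs before the new, unlabeled input:

```python
examples = [
    ("The battery died after two days.", "negative"),
    ("Setup took five minutes and it just works.", "positive"),
]

new_input = "The screen is gorgeous but the speakers are tinny."

# Demonstrations first, then the unlabeled case in the same format.
demo_block = "\n".join(f"Text: {text}\nSentiment: {label}\n" for text, label in examples)
prompt = f"{demo_block}Text: {new_input}\nSentiment:"

print(prompt)
```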

**Chain-of-Thought Prompting:**
Encouraging the model to "think step-by-step" by explicitly prompting it to break down complex tasks into intermediate reasoning steps enhances its ability to solve problems that require logical deduction. Learn more about chain-of-thought prompting [here](https://www.promptingguide.ai/techniques/cot).
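
A minimal zero-shot chain-of-thought sketch; the word problem is made up, and the key addition is the explicit "think step by step" instruction appended to the question:

```python
question = (
    "A bakery sold 23 croissants in the morning and twice as many in the afternoon. "
    "How many croissants did it sell in total?"
)

# The trailing instruction nudges the model to emit intermediate reasoning
# before giving the final answer (zero-shot chain-of-thought).
prompt = f"{question}\n\nLet's think step by step, then state the final answer on its own line."

print(prompt)
```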

**ReAct (Reason + Act):**
This method focuses on eliciting advanced reasoning, planning, and even tool use from the LLM. By structuring prompts to encourage these capabilities, developers can unlock more sophisticated and powerful applications. Learn more about ReAct [here](https://www.promptingguide.ai/techniques/react).
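
A rough sketch of a ReAct-style prompt scaffold, interleaving Thought / Action / Observation steps; the tool names are hypothetical, and the loop that would actually execute the actions and feed observations back is omitted:

```python
question = "Who directed the film that won Best Picture the year the first iPhone was released?"

# ReAct-style scaffold: the model alternates free-form reasoning ("Thought")
# with tool calls ("Action"), and the caller injects tool results ("Observation").
prompt = f"""Answer the question by interleaving Thought, Action, and Observation steps.
Available actions: Search[query], Finish[answer].

Question: {question}
Thought:"""

print(prompt)
```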

### Conclusion

Effective prompt design is crucial for harnessing the full potential of LLMs. By adhering to best practices such as specificity, structured formatting, clear delimiters, and task decomposition, and by leveraging advanced techniques like few-shot, chain-of-thought, and ReAct prompting, developers can significantly improve the quality and accuracy of model outputs, even on complex tasks.