This article collects 40 prompting techniques for writing clearer, more effective prompts for large language models (LLMs).
GENERAL TECHNIQUES
1. Clear and Specific Instructions
- Explanation: Avoid vague questions. Be explicit about the task, audience, and format.
- Example: "Explain the blockchain to a 12-year-old in under 100 words."
2. Use Role-based Prompting
- Explanation: Give the model a persona to guide tone, expertise, and response style.
- Example: "You are a cybersecurity analyst. Describe common phishing attacks."
3. Specify Output Format
- Explanation: Dictate how the output should be structured (JSON, list, table, etc.).
- Example: "Return a JSON object with fields: summary, risk_level."
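A format-constrained prompt pairs naturally with strict parsing on your side. A minimal Python sketch using the fields from the example above; the response string here is a stand-in for a real LLM API call:

```python
import json

# Prompt that pins down the exact output structure.
prompt = (
    "Assess the security risk described in the text below.\n"
    "Return ONLY a JSON object with fields: summary, risk_level.\n\n"
    "Text: Users reuse the same password across internal services."
)

# Hypothetical model response; in practice this comes from an LLM API call.
response = '{"summary": "Password reuse across services", "risk_level": "high"}'

result = json.loads(response)  # strict parsing fails fast on malformed output
print(result["risk_level"])
```

If `json.loads` raises, you know immediately the model broke the format and can retry.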
4. Use Input/Output Delimiters
- Explanation: Clearly separate instructions, input, and expected output.
- Example:
Input:
'''
The sun is a star.
'''
Task: Summarize the above in one sentence.
5. Provide Examples (Few-shot Prompting)
- Explanation: Include example inputs and outputs to teach the task format.
- Example:
Q: 5 + 2
A: 7
Q: 3 + 4
A: 7
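Few-shot prompts like the one above are usually assembled programmatically from (input, output) pairs. A small sketch (function name and Q/A layout are just one common convention):

```python
def build_few_shot_prompt(examples, query):
    """Format (input, output) pairs as Q/A demonstrations, then append the query."""
    lines = []
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # trailing "A:" cues the model to complete the answer
    return "\n".join(lines)

prompt = build_few_shot_prompt([("5 + 2", "7"), ("3 + 4", "7")], "6 + 1")
print(prompt)
```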
6. Avoid Ambiguity
- Explanation: Be precise in wording to prevent misinterpretation.
- Example: Replace "bank" with "financial institution" or "riverbank."
7. Use Task Separation
- Explanation: Break complex tasks into distinct labeled steps.
- Example:
Step 1: Identify the key concept.
Step 2: Write a simple analogy.
Step 3: Summarize in 2 sentences.
8. Constrain Response Length
- Explanation: Prevent verbose or off-topic answers by limiting response size.
- Example: "List 3 benefits, each under 10 words."
9. Prime with Domain Language
- Explanation: Use specialized vocabulary to signal the domain context.
- Example: Use "diagnosis" and "symptoms" in medical prompts.
10. Use Natural Language Continuations
- Explanation: Start prompts mid-document or mid-conversation to provide context.
- Example: "Here's how we handle production issues:"
11. Tell the Model What NOT to Do
- Explanation: Explicitly restrict behaviors you want to avoid.
- Example: "Do not include links or generic disclaimers."
-------------------------------------------------------------------------------
HALLUCINATION REDUCTION TECHNIQUES
12. Grounding with Context
- Explanation: Provide background documents or facts and instruct the model to use only those.
- Example: "Based only on the content below, answer the question."
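A grounded prompt is typically built by wrapping the retrieved context and the question in a fixed template. One possible sketch (wording of the instruction is illustrative):

```python
def grounded_prompt(context, question):
    """Wrap a question so the model is told to use only the supplied context."""
    return (
        "Based only on the content below, answer the question. "
        "If the answer is not in the content, say you don't know.\n\n"
        f"Content:\n'''\n{context}\n'''\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("The sun is a star.", "What is the sun?"))
```

Note this combines grounding with technique 14 (allowing "I don't know"), which further reduces guessing.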
13. Ask for Source Attribution
- Explanation: Require the model to cite where each fact came from.
- Example: "Include page or paragraph numbers with each claim."
14. Allow "I Don't Know"
- Explanation: Prevent guessing by allowing uncertainty.
- Example: "If unsure, say 'I don't know.' Don't make up facts."
15. Chain-of-Thought Reasoning
- Explanation: Ask the model to break down its reasoning before giving a final answer.
- Example: "Let's think step by step."
16. Restrict Output to Given Context
- Explanation: Keep the model from drawing on its internal knowledge by limiting the response to the provided material.
- Example: "Only answer using the article below."
17. Narrow Scope of Answer
- Explanation: Focus the prompt on a very specific aspect.
- Example: "List only the challenges, not the benefits."
18. Add a Verification Step
- Explanation: Ask the model to review and verify its previous answer.
- Example: "Check if the response matches the facts."
-------------------------------------------------------------------------------
TASK-SPECIFIC TECHNIQUES
-- Reasoning & Logic --
19. Use "Let's think step by step"
- Helps the model reason through problems logically.
20. Scratchpad Prompting
- Explanation: Allow the model to use notes or intermediate steps.
- Example:
Scratchpad:
- Add A and B
- Then divide by C
Final Answer:
21. Self-Consistency Prompting
- Explanation: Generate multiple answers and select the most consistent.
- Example: "Generate 3 answers. Choose the majority response."
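The selection step of self-consistency is a plain majority vote over independent samples. A minimal sketch (the sample answers here are hypothetical stand-ins for repeated LLM calls):

```python
from collections import Counter

def majority_answer(answers):
    """Pick the most frequent answer among several independent samples."""
    counts = Counter(a.strip() for a in answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical answers from three runs of the same prompt:
samples = ["42", "42", "41"]
print(majority_answer(samples))  # → 42
```

In practice the samples are generated at a nonzero temperature so the runs actually differ.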
-- Summarization --
22. Ask for Highlights or Surprises
- Focus on key insights, not a full rehash.
- Example: "List the 3 most surprising facts in the article."
23. Summarize in Segments
- Summarize paragraph-by-paragraph before producing the final output.
- Example:
Para 1 Summary:
Para 2 Summary:
Overall Summary:
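The segmenting step above can be automated: split the document on blank lines and emit one summarization prompt per paragraph, then merge the partial summaries in a final call. A sketch, assuming paragraphs are separated by blank lines:

```python
def segment_prompts(document):
    """Build one summarization prompt per blank-line-separated paragraph."""
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    return [
        f"Summarize paragraph {i} in one sentence:\n'''\n{p}\n'''"
        for i, p in enumerate(paragraphs, start=1)
    ]

doc = "The sun is a star.\n\nIt is about 4.6 billion years old."
for p in segment_prompts(doc):
    print(p)
```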
-- Classification --
24. Specify Valid Labels
- Example: "Classify as one of: [Positive, Neutral, Negative]"
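Constraining labels in the prompt also lets you validate the response mechanically and retry on anything off-list. A small sketch using the labels above (exact-match validation is one deliberate, strict choice):

```python
LABELS = ["Positive", "Neutral", "Negative"]

def validate_label(response):
    """Accept only an exact allowed label (after trimming); None means retry."""
    cleaned = response.strip()
    return cleaned if cleaned in LABELS else None
```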
25. Provide Labeled Examples
- Example:
Text: "Great service!"
Sentiment: Positive
-- Extraction --
26. Template-based Extraction
- Example:
Name: ____
Age: ____
Diagnosis: ____
27. Use Start and End Markers
- Example:
<start>Name: John Doe<end>
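Markers like these make the interesting span trivially recoverable with a regular expression, even when the model adds surrounding chatter. A minimal sketch for the `<start>`/`<end>` convention above:

```python
import re

def extract_marked(text):
    """Pull every span wrapped in <start>...<end> markers."""
    return re.findall(r"<start>(.*?)<end>", text, flags=re.DOTALL)

output = "Sure, here is the result: <start>Name: John Doe<end> Hope that helps!"
print(extract_marked(output))  # → ['Name: John Doe']
```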
-- Multilingual or Code Tasks --
28. Set Language Context
- Example: "Translate from English to French. Use informal tone."
29. Add Pseudocode or Comments
- Example:
// Image shows a dog jumping
"Describe the image."
-------------------------------------------------------------------------------
ADVANCED STRATEGIES
30. Decompose into Subprompts
- Explanation: Handle complex tasks by breaking them into sequential prompts.
31. Zero-shot Chain-of-Thought
- Elicit step-by-step reasoning in a single prompt, without providing worked examples.
- Example: "To solve, first analyze the question, then provide an answer."
32. Retrieval-Augmented Generation (RAG)
- Explanation: Use external search or embedding tools to fetch relevant content dynamically.
33. Dynamic Prompt Assembly
- Explanation: Construct prompts on-the-fly based on user input or system state.
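A sketch of on-the-fly assembly: a builder that includes only the parts the caller supplies (role, constraints, context). The part names and layout are illustrative, not a standard:

```python
def assemble_prompt(role=None, task="", constraints=(), context=None):
    """Build a prompt from optional parts based on user input or system state."""
    parts = []
    if role:
        parts.append(f"You are {role}.")          # role-based prompting (technique 2)
    parts.append(task)
    for c in constraints:
        parts.append(f"- {c}")                    # explicit constraints
    if context:
        parts.append(f"Context:\n'''\n{context}\n'''")  # grounding (technique 12)
    return "\n".join(parts)

print(assemble_prompt(
    role="a cybersecurity analyst",
    task="Describe common phishing attacks.",
    constraints=["Under 100 words"],
))
```

This composes several earlier techniques; which parts appear can depend on runtime state (e.g. whether retrieval found any context).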
34. Meta-prompting
- Explanation: Ask the model how it would prompt itself for a task.
- Example: "You are a prompt engineer. How would you prompt yourself?"
35. Hybrid Prompting (Instruction + Few-shot)
- Combine direct instructions with worked examples.
36. Tune Temperature and Top-k
- Explanation: Adjust sampling settings for API-based models (lower = more deterministic).
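These settings live in the API request rather than the prompt text. A hedged sketch of typical presets; the exact parameter names, ranges, and defaults vary by provider, so treat the values below as illustrative:

```python
# Common sampling presets; exact parameter names and ranges vary by provider API.
PRESETS = {
    "deterministic": {"temperature": 0.0, "top_k": 1},    # extraction, classification
    "balanced":      {"temperature": 0.7, "top_k": 40},   # general Q&A
    "creative":      {"temperature": 1.0, "top_k": 100},  # brainstorming
}

def sampling_params(task_kind):
    """Look up sampling settings for a task type, falling back to 'balanced'."""
    return PRESETS.get(task_kind, PRESETS["balanced"])
```

Self-consistency (technique 21) relies on a nonzero temperature so that repeated runs actually produce different samples.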
37. Prompt Ensembling
- Explanation: Ask the same thing multiple ways, then merge or compare results.
38. Ask for Counterexamples
- Example: "What's an example where this rule fails?"
39. Prompt Critique
- Ask the model to critique or analyze its own output.
40. Prompt Self-Reflection
- Ask: "What assumptions were made in the previous answer?"