Prompt Engineering Cheat Sheet
Your complete reference for prompt engineering. Bookmark this page and use it whenever you need to write an effective prompt quickly.
Techniques Comparison Table
| Technique | How It Works | When to Use | Example Trigger |
|---|---|---|---|
| Zero-shot | Just ask, no examples | Simple, well-defined tasks | "Write a function that..." |
| Few-shot | Provide 2-5 examples first | Need specific format or style | "Here are examples: ... Now do this:" |
| Chain-of-thought | Ask AI to think step by step | Reasoning, math, debugging | "Think step by step" |
| Role prompting | Assign a persona | Need specialized expertise | "You are a senior engineer..." |
| System prompts | Set behavior via API | Building AI-powered apps | role: "system" in API calls |
| Prompt chaining | Multi-step pipeline | Complex, multi-phase tasks | Output of step 1 feeds step 2 |
| Constrained output | Force specific format | Need JSON, tables, or lists | "Return only valid JSON" |
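Several of the triggers in the table map directly onto the chat-message structure that most provider APIs share. A minimal sketch, assuming an OpenAI-style messages array (the exact request shape varies by provider, and these helper names are illustrative, not a standard API):

```python
# Zero-shot, few-shot, and system prompts expressed as chat messages.
# The {"role": ..., "content": ...} shape is the common pattern across
# providers; adapt field names to your actual SDK.

def zero_shot(task: str) -> list[dict]:
    """Zero-shot: just ask, no examples."""
    return [{"role": "user", "content": task}]

def few_shot(task: str, examples: list[tuple[str, str]]) -> list[dict]:
    """Few-shot: prepend 2-5 input/output example pairs before the real task."""
    messages = []
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": task})
    return messages

def with_system(task: str, system: str) -> list[dict]:
    """System prompt: set behavior once, then send the user task."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]
```

The few-shot helper is why "Here are examples: ... Now do this:" works: the model sees completed input/output pairs in the same conversation before it sees your task.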
The Universal Prompt Template
Role: You are a [specific role with relevant expertise].
Task: [Clear action verb] [specific deliverable].
Context:
- [Technology stack / environment]
- [Current situation / relevant background]
- [Existing code, data, or errors]
Format:
- [How the output should be structured]
- [Length, style, or presentation requirements]
Constraints:
- [What to avoid]
- [Limits and boundaries]
- [Edge cases to handle]
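If you reuse the template often, it can be assembled programmatically so every prompt in a codebase keeps the same shape. A hedged sketch (the function name and parameter choices are illustrative, not a standard library):

```python
# Build a prompt from the Role/Task/Context/Format/Constraints framework.
# Each list becomes a bulleted section, mirroring the template above.

def build_prompt(role: str, task: str, context: list[str],
                 fmt: list[str], constraints: list[str]) -> str:
    def bullets(items: list[str]) -> str:
        return "\n".join(f"- {item}" for item in items)
    return (
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Context:\n{bullets(context)}\n"
        f"Format:\n{bullets(fmt)}\n"
        f"Constraints:\n{bullets(constraints)}"
    )
```

For example, `build_prompt("a senior Python engineer", "Refactor the function below.", ["Python 3.12", "Runs in AWS Lambda"], ["Return only code"], ["No external libraries"])` produces a prompt in exactly the template's shape.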
Template Library for Common Tasks
Code Generation
Write a [language] function called [name] that [what it does].
Parameters:
- [param1]: [type] — [description]
- [param2]: [type] — [description]
Returns: [type] — [description]
Requirements:
- [requirement 1]
- [requirement 2]
- Handle edge cases: [list them]
Include: type annotations, doc comments (JSDoc, docstrings, or the [language] equivalent), and 3 test cases.
Do not use external libraries.
Code Review
Review this [language] code for:
1. Bugs and logical errors
2. Security vulnerabilities
3. Performance issues
4. Readability and best practices
Code:
[paste code]
Format your response as:
## Critical Issues (must fix)
## Warnings (should fix)
## Suggestions (nice to have)
## Score: X/10
Debugging
I'm getting this error in my [framework] app:
[paste full error message and stack trace]
Here's the relevant code:
[paste code]
What I expected: [describe expected behavior]
What happened: [describe actual behavior]
Walk through the code step by step to find the root cause.
Then provide the fix with an explanation of why it works.
Explain a Concept
Explain [concept] to a [audience level: beginner/intermediate/senior].
Requirements:
- Start with a one-sentence definition
- Use a real-world analogy
- Show a practical code example in [language]
- List 3 common mistakes people make
- End with "when to use this" and "when NOT to use this"
Keep it under [word count] words.
Compare Technologies
Compare [option A] vs [option B] for [specific use case].
Format as a table:
| Criteria | [Option A] | [Option B] |
|---|---|---|
| Learning curve | | |
| Performance | | |
| Ecosystem | | |
| Cost | | |
| Best for | | |
Follow with a recommendation (3 sentences max) based on this context:
[describe your situation]
Data Extraction
Extract the following fields from the text below.
Return valid JSON. Use null for missing fields.
Fields:
- [field]: [type] — [description]
- [field]: [type] — [description]
Text:
"""
[paste text]
"""
Content Rewriting
Rewrite the following [content type] with these changes:
- Tone: [professional/casual/technical/friendly]
- Audience: [who will read this]
- Length: [shorter/same/longer — target word count]
- Preserve: [what to keep unchanged]
- Change: [what to modify]
Original:
"""
[paste content]
"""
API Documentation
Document this API endpoint:
Method: [GET/POST/PUT/DELETE]
Path: [/api/v1/resource]
Purpose: [what it does]
Include:
1. Description (one sentence)
2. Authentication requirements
3. Request parameters (table: name, type, required, description)
4. Request body schema (if applicable)
5. Success response (status code + example JSON)
6. Error responses (table: status code, meaning)
7. curl example
8. TypeScript fetch example
Meta-Prompts: AI Prompts for Improving Prompts
These prompts help you write better prompts using AI itself.
Prompt Optimizer
I wrote this prompt but the AI output wasn't good enough:
"""
[paste your original prompt]
"""
The problem with the output was: [describe what was wrong]
Rewrite my prompt to fix this issue. Use the Role/Task/Context/Format/Constraints
framework. Explain what you changed and why.
Prompt Generator
I need to accomplish this task: [describe task]
Write an optimized prompt that I can use with an AI model.
The prompt should:
- Use the Role/Task/Context/Format/Constraints structure
- Be specific enough to get a useful response on the first try
- Include any necessary examples (few-shot)
- Add guardrails to prevent common mistakes
Output: The prompt I should copy and paste, ready to use.
Prompt Evaluator
Evaluate this prompt on a scale of 1-10 across these criteria:
Prompt:
"""
[paste prompt]
"""
Criteria:
1. Clarity — Is the task unambiguous?
2. Specificity — Does it provide enough context?
3. Format — Does it define the expected output?
4. Constraints — Does it prevent common failure modes?
5. Completeness — Is anything missing?
For each criterion scored below 8, suggest a specific improvement.
Prompt Engineering Decision Tree
Use this to choose the right approach for any task:
START
|
v
Is it a simple, common task?
|
YES --> Use ZERO-SHOT (just ask)
| |
| v
| Output good enough?
| YES --> Done
| NO --> Go to "Output problems"
|
NO --> Does it need a specific output format?
|
YES --> Use FEW-SHOT (provide 2-3 examples)
| |
| v
| Output good enough?
| YES --> Done
| NO --> Go to "Output problems"
|
NO --> Does it require reasoning or multi-step logic?
|
YES --> Use CHAIN-OF-THOUGHT ("think step by step")
NO --> Use ROLE PROMPTING + clear task description
OUTPUT PROBLEMS
|
v
What's wrong?
|
Too vague --> Add more CONTEXT (tech stack, numbers, specifics)
Wrong answer --> Add CONTEXT + ask AI to verify its answer
Hallucinated --> Provide REFERENCE material + add guardrails
Too verbose --> Add FORMAT constraints (word limit, bullet points)
Wrong format --> Add explicit FORMAT instructions + example
Inconsistent --> Add FEW-SHOT examples + lower temperature
|
v
Still not working?
--> Try a different MODEL (larger/more capable)
--> Use PROMPT CHAINING (break into smaller steps)
--> Use the META-PROMPT optimizer (ask AI to improve your prompt)
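When the tree lands on prompt chaining, the pipeline is simply each step's output feeding the next step's input. A sketch with a stubbed model call (`call_model` is a placeholder; swap in your provider's real API client):

```python
def call_model(prompt: str) -> str:
    # Placeholder: in practice this calls your provider's API and
    # returns the model's text reply.
    return f"<output for: {prompt}>"

def chain(initial_input: str, step_templates: list[str]) -> str:
    """Run a multi-step pipeline: each template receives the previous
    step's output via the {previous} placeholder."""
    result = initial_input
    for template in step_templates:
        result = call_model(template.format(previous=result))
    return result
```

Keeping each step's template small is what makes chaining beat one giant prompt; as the Don'ts below note, errors compound, so stay under 4-5 steps.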
Quick Tips
Do
- Be specific about what you want
- Include relevant context (tech stack, constraints, audience)
- Specify the output format explicitly
- Provide examples when you need consistent formatting
- Test with multiple inputs before relying on a prompt
- Version and iterate on your prompts
- Use lower temperature (0-0.3) for factual/code tasks
- Use higher temperature (0.7-1.0) for creative tasks
Don't
- Don't write vague prompts and hope for the best
- Don't assume the AI knows your tech stack or project context
- Don't skip format instructions — you'll get inconsistent output
- Don't use AI output without verifying critical facts
- Don't use the same prompt for every model — adjust for strengths
- Don't write 2000-word system prompts — focus on 5-10 key rules
- Don't chain more than 4-5 steps — errors compound
Model Selection Guide
| Task | Best Model Choice | Why |
|---|---|---|
| Simple text classification | GPT-4o-mini, Claude Haiku | Fast, cheap, accurate enough |
| Code generation | GPT-4o, Claude Sonnet | Good balance of quality and speed |
| Complex reasoning | Claude Opus, GPT-4o | Strongest analytical capability |
| Large document analysis | Claude (200K context) | Largest context window |
| Structured data extraction | GPT-4o with JSON mode | Built-in JSON formatting |
| Creative writing | Claude Sonnet/Opus | Nuanced, natural language |
| Real-time / streaming | Any model with streaming API | Low latency for UX |
| High volume / batch | GPT-4o-mini, Claude Haiku | Cost-effective at scale |
Temperature Guide
| Temperature | Behavior | Use For |
|---|---|---|
| 0.0 | Deterministic, same output every time | Data extraction, classification, code |
| 0.3 | Slightly varied, mostly predictable | Code generation, technical writing |
| 0.7 | Balanced creativity and coherence | General writing, brainstorming |
| 1.0 | Highly creative, unpredictable | Creative writing, ideation |
| 1.5+ | Wild, often incoherent | Rarely useful — experimental only |
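Temperature is a per-request parameter in most APIs, so the table can be codified as a default lookup. A sketch (the values mirror the table; the category names and helper are illustrative):

```python
# Default temperature per task category, mirroring the table above.
TEMPERATURE_BY_TASK = {
    "extraction": 0.0,
    "classification": 0.0,
    "code": 0.3,
    "technical_writing": 0.3,
    "general_writing": 0.7,
    "brainstorming": 0.7,
    "creative_writing": 1.0,
}

def pick_temperature(task_type: str) -> float:
    # Default to the conservative end for unknown task types: a too-low
    # temperature is rarely harmful, a too-high one often is.
    return TEMPERATURE_BY_TASK.get(task_type, 0.3)
```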
Keep Improving
Prompt engineering is a skill that improves with practice. Every time you use AI:
- Notice when the output isn't what you wanted
- Diagnose which failure mode caused the problem
- Apply the targeted fix from this book
- Save the improved prompt for reuse
- Share what works with your team
The best prompt engineers aren't the ones who memorize templates; they're the ones who understand why a prompt works and can adapt it to any situation.
Happy prompting!