AutomateMyJob

Stop Getting Generic AI Responses: The Specificity Framework

David Park · 10 min read


"That's not what I wanted." If you've ever thought this after reading an AI response, the problem probably wasn't the AI—it was the prompt. Vague prompts produce vague answers. Generic inputs generate generic outputs. The solution is systematic specificity.

This guide introduces the Specificity Framework, a method for transforming wishy-washy prompts into laser-focused requests that get you exactly what you need.

Why This Matters

Consider what happens when you ask someone to "write something about productivity." They could write:

  • A personal essay about their morning routine
  • A research paper on workplace efficiency
  • A listicle of productivity apps
  • A philosophical treatise on the meaning of productive work

Without specificity, you've given them infinite directions to run. AI is no different—except it runs fast. It will generate something quickly, but that something may be nowhere near what you needed.

Specificity is the difference between:

  • Generic: "Write about marketing"
  • Specific: "Write a 300-word LinkedIn post explaining why small business owners should prioritize email marketing over social media, with 2 statistics and a clear call-to-action"

The second prompt can be answered in essentially only one way. That's the power of specificity.

The Specificity Framework

The framework has four levels, each adding more precision to your prompt:

Level 1: Define the What

Specify the exact deliverable you need.

Vague: "Write about project management"
Specific: "Write a checklist for running effective project kickoff meetings"

Vague: "Help with my presentation"
Specific: "Create an outline for a 15-minute presentation on Q3 sales results"

Key questions:

  • What type of content? (email, report, list, analysis)
  • What format? (paragraph, bullets, table)
  • What length? (word count, number of items)

Level 2: Define the Who

Specify who's involved—author perspective and audience.

Vague: "Write about cloud computing benefits"
Specific: "Write about cloud computing benefits from the perspective of an IT director, for an audience of CFOs who are skeptical about migration costs"

Key questions:

  • Who is speaking/writing? (role, expertise level)
  • Who is the audience? (role, knowledge level, concerns)
  • What relationship exists between them?

Level 3: Define the Context

Specify the situation, constraints, and background.

Vague: "Write a rejection email"
Specific: "Write a rejection email to a job candidate who made it to final rounds. We liked them but chose someone with more direct experience. We want them to apply again for future roles."

Key questions:

  • What's the situation? (background, what happened)
  • What constraints exist? (time, budget, requirements)
  • What's the desired outcome? (action, feeling, understanding)

Level 4: Define the Quality Markers

Specify what good looks like and what to avoid.

Vague: "Make it professional"
Specific: "Use formal language but avoid jargon. Include data to support claims. Don't use superlatives like 'best' or 'amazing.' End with a concrete next step."

Key questions:

  • What must be included?
  • What should be avoided?
  • What does success look like?

Examples in Action

Example 1: Content Request

Before (Generic Prompt):

Write a blog post about remote work.

After (Specific Prompt):

Prompt
Write a 600-word blog post about remote work.

WHAT: Focus specifically on how to maintain team culture 
with distributed teams. Provide 3 actionable strategies.

WHO: Write for a VP of People Operations at a tech company. 
They've managed in-office teams successfully but are new 
to remote management.

CONTEXT: Their company went fully remote 6 months ago 
and employee engagement scores have dropped 15%.

QUALITY MARKERS:
- Include at least one specific example for each strategy
- Use a helpful, experienced tone (not preachy)
- End with a quick-win they can implement this week
- Avoid generic advice like "use video calls more"

Why It's Better: Every sentence in the output can be evaluated against clear criteria. The AI knows exactly what problem to solve and for whom.

Example 2: Analysis Request

Before (Generic Prompt):

Analyze this sales data and give me insights.

After (Specific Prompt):

Prompt
Analyze the attached sales data from Q3.

WHAT: Identify the 3 most actionable insights for 
improving Q4 performance.

WHO: I'm the sales director. I need to present to the 
executive team who care most about revenue growth and 
sales efficiency.

CONTEXT: We launched 2 new products in Q3. Our sales 
team grew from 10 to 15 reps. We're 8% under target 
for the year.

QUALITY MARKERS:
- Lead with the insight, then supporting data
- Quantify impact where possible ("This represents...")
- Include specific recommendations, not just observations
- Flag anything surprising that deserves investigation

[Data attached]

Why It's Better: The AI understands what story to tell, what the priorities are, and how to structure insights for the audience.

Example 3: Communication Request

Before (Generic Prompt):

Write a message to my team about the deadline change.

After (Specific Prompt):

Prompt
Write a Slack message to my engineering team about a 
deadline change.

WHAT: Announce that the launch date is moving from 
March 15 to April 1.

WHO: I'm the engineering manager. My team of 8 developers 
has been working hard to hit the original date.

CONTEXT: The delay is due to a legal review requirement 
we just learned about, not because of any problem with 
the team's work. I want to acknowledge their effort 
and prevent morale impact.

QUALITY MARKERS:
- Keep it under 150 words
- Be direct about the change (no burying the news)
- Acknowledge the effort they've already put in
- Frame the extra time positively
- Don't be overly apologetic or make promises about it 
  not happening again

Why It's Better: The output will hit the right tone, address the right concerns, and fit the communication medium.

Copy-Paste Prompts

The Specificity Template

Prompt
[Your request]

WHAT (Deliverable):
- Type: [exact format/content type]
- Length: [word count, number of items, etc.]
- Structure: [how it should be organized]

WHO (Perspective & Audience):
- Author/voice: [who is speaking]
- Audience: [who will read/use this]
- Audience needs: [what they care about]

CONTEXT (Situation):
- Background: [relevant history]
- Purpose: [why this is needed now]
- Constraints: [limitations to work within]

QUALITY MARKERS:
- Must include: [required elements]
- Must avoid: [things to exclude]
- Tone: [how it should feel]
- Success looks like: [what makes this good]
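If you reuse this template often, you can assemble it programmatically. Here's a minimal Python sketch of that idea; the function and field names are illustrative, not part of any library:

```python
# Minimal sketch: assemble a Specificity Framework prompt from its four levels.
# All names here are illustrative assumptions, not an official API.

def build_prompt(request, what, who, context, quality):
    """Combine a base request with the four specificity levels."""
    sections = [
        request,
        "WHAT (Deliverable):\n" + "\n".join(f"- {item}" for item in what),
        "WHO (Perspective & Audience):\n" + "\n".join(f"- {item}" for item in who),
        "CONTEXT (Situation):\n" + "\n".join(f"- {item}" for item in context),
        "QUALITY MARKERS:\n" + "\n".join(f"- {item}" for item in quality),
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    request="Write a 600-word blog post about remote work.",
    what=["Type: blog post", "Length: 600 words"],
    who=["Audience: VP of People Operations, new to remote management"],
    context=["Company went fully remote 6 months ago"],
    quality=["Must include: one example per strategy",
             "Must avoid: generic advice"],
)
print(prompt)
```

Filling the template in code also makes gaps obvious: an empty list under any level is a signal that your prompt is still under-specified.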

Quick Specificity Checklist

Prompt
Before submitting my prompt, have I specified:

□ What type of content I need?
□ What length or scope?
□ Who the audience is?
□ What context is relevant?
□ What must be included?
□ What should be avoided?
□ What tone or style?

Common Mistakes

❌ Mistake: Over-specifying trivial details while under-specifying important ones
✅ Fix: Prioritize constraints that actually affect output quality

❌ Mistake: Using specificity as a substitute for good examples
✅ Fix: When format matters, show an example alongside your specifications

❌ Mistake: Making prompts so long they're confusing
✅ Fix: Be specific but organized—use clear sections and labels

❌ Mistake: Specifying format but not purpose
✅ Fix: Always clarify why you need this and how it will be used

❌ Mistake: Being specific about what you want but not what you don't want
✅ Fix: Include exclusions—what should the AI avoid doing?

When to Use This Technique

  • When you've gotten generic responses before
  • When output quality matters (client-facing, public, important)
  • When you need consistency across multiple similar requests
  • When working with specialized domains or audiences
  • When the task has clear success criteria

When NOT to Use This Technique

  • Early brainstorming when you want diverse ideas
  • When exploring a topic you don't know well yet
  • Quick, low-stakes questions where "good enough" works
  • Creative work where you want AI to surprise you

Advanced Variations

The Negative Constraint Stack

Sometimes specifying what NOT to do is more efficient:

Prompt
Write a product description for our new software.

DO NOT:
- Use buzzwords like "revolutionary" or "game-changing"
- Make claims we can't prove
- Mention competitors by name
- Use more than 100 words
- Include technical specifications

DO:
- Focus on time saved for the user
- Include a specific use case
- End with a clear next step
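One nice property of negative constraints is that many of them can be checked mechanically after generation. Here's a small sketch of that idea; the banned-word list and word cap are illustrative assumptions:

```python
# Sketch: check a draft against a negative-constraint stack after generation.
# The banned list and word cap below are illustrative assumptions.

BANNED = {"revolutionary", "game-changing"}
MAX_WORDS = 100

def violations(text):
    """Return a list of constraint violations found in the draft."""
    problems = []
    lowered = text.lower()
    for word in BANNED:
        if word in lowered:
            problems.append(f"banned buzzword: {word!r}")
    if len(text.split()) > MAX_WORDS:
        problems.append(f"over {MAX_WORDS} words")
    return problems

draft = "Our revolutionary tool saves you an hour a day."
print(violations(draft))  # → ["banned buzzword: 'revolutionary'"]
```

If a draft fails a check, you can paste the violation list back into the conversation and ask for a revision, which is usually faster than re-explaining the constraints.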

The Comparison Anchor

Give the AI something to be "like" or "unlike":

Prompt
Write an email similar to the example below in tone 
and structure, but for this different situation:

[Example email]

My situation: [your context]

The email should be like the example in: [specific 
qualities to match]

But different in: [specific qualities to change]

The Rubric Method

Define how you'll evaluate success:

Prompt
[Your request]

I will evaluate your response on:
1. Relevance to the specific audience (0-10)
2. Actionability of recommendations (0-10)
3. Appropriate length (0-10)
4. Correct tone (0-10)

A good response scores 8+ on all criteria.

Practice Exercise

Try this prompt and modify it for your needs:

Prompt
I need: [one sentence describing what you need]

Help me make this request more specific by asking me 
questions about:
- What exactly I need (format, length, structure)
- Who will use this and what they care about
- What context is relevant
- What good vs. bad output looks like

After I answer, rewrite my original request with full 
specificity.

This meta-approach helps you identify what specificity you're missing.

Key Takeaways

  • Vague prompts produce vague outputs—every time
  • Four levels of specificity: What, Who, Context, Quality Markers
  • Specificity isn't length—be precise, not verbose
  • Exclusions matter—tell the AI what to avoid
  • Match specificity to stakes—high-importance tasks need high specificity
  • Use the framework until it becomes instinct

Conclusion

The Specificity Framework transforms how you interact with AI. Instead of hoping the AI reads your mind, you tell it exactly what you need. Instead of iterating through multiple attempts, you get it right the first time.

Start by asking yourself: "Could this prompt be answered multiple ways?" If yes, it needs more specificity. Add the What, the Who, the Context, and the Quality Markers until there's only one way to answer correctly.

The investment in prompt specificity pays for itself immediately. One well-crafted prompt beats five vague attempts—and the results speak for themselves.

Be specific. Get exactly what you asked for.
