ACFS Academy Learning Hub
Lesson 18
12 min

The Art of Agent Direction

Prompting patterns that produce excellent results

Goal

Master the art of directing AI agents with precision and intention.

Why Prompting Matters

The difference between a mediocre agent session and a brilliant one often comes down to how you direct the agent. This lesson dissects the patterns that make prompts effective, drawn from real-world workflows that consistently produce excellent results.

Intensity Calibration

Signal how much attention to allocate

Scope Control

Expand or contract the search space

Metacognition

Force self-verification and reflection

Context Anchoring

Ground behavior in stable references

Pattern 1: Intensity Calibration

AI models tend to allocate reasoning effort in proportion to how important a task appears. Stacked modifiers signal that this task deserves maximum attention:

  • "super carefully" → Elevates attention above baseline
  • "super careful, methodical, and critical" → Triple-stacking for maximum precision
  • "systematically and meticulously and intelligently" → Emphasizes both process and quality
bash
# Low intensity (default behavior)
"Check the code for bugs"
# High intensity (elevated attention)
"Do a super careful, methodical, and critical check
with fresh eyes to find any obvious bugs, problems,
errors, issues, silly mistakes, etc. and then
systematically and meticulously and intelligently
correct them."
Note
These aren't filler words. They're calibration signals that tell the model to allocate more reasoning depth to the task.
Pro Tip
Claude Code feature: The word ultrathink is a specific Claude Code directive that tells the system to allocate significantly more thinking tokens. While it's a tool-level feature in Claude Code, using intensity words like "think deeply" or "reason carefully" can help other agents/models allocate more attention to complex tasks as well.

Pattern 2: Scope Control

Models tend to take shortcuts. Explicit scope directives push against premature narrowing:

↔ Breadth

  • "take ALL of that"
  • "Don't restrict yourself"
  • "cast a wider net"
  • "comprehensive and granular"

↓ Depth

  • "go super deep"
  • "deeply investigate and understand"
  • "trace their functionality and execution flows"
  • "first-principle analysis"
bash
# Avoiding narrow focus
"Don't restrict yourself to the latest commits,
cast a wider net and go super deep!"
# Comprehensive coverage
"Take ALL of that and elaborate on it more,
then create a comprehensive and granular set..."
# Depth with breadth
"Randomly explore the code files in this project,
choosing code files to deeply investigate and understand
and trace their functionality and execution flows
through the related code files which they import
or which they are imported by."
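The breadth and depth directives above compose the same way. A minimal sketch, with invented names and phrasing quoted from this lesson's examples:

```python
# Hypothetical sketch: adding explicit breadth and depth directives
# so a task is not narrowed prematurely. The directive phrasing comes
# from the examples in this lesson; the function names are invented.
BREADTH_DIRECTIVE = "Don't restrict yourself; cast a wider net."
DEPTH_DIRECTIVE = ("Go super deep: trace functionality and execution "
                   "flows through related files.")

def widen_scope(task: str, breadth: bool = True, depth: bool = True) -> str:
    """Append breadth and/or depth directives to a task prompt."""
    parts = [task]
    if breadth:
        parts.append(BREADTH_DIRECTIVE)
    if depth:
        parts.append(DEPTH_DIRECTIVE)
    return " ".join(parts)
```

Breadth and depth are independent axes, which is why they are separate flags here: a code review might need breadth without depth, while a root-cause investigation needs both.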

Pattern 3: Forcing Self-Verification

Questions trigger metacognition—forcing the model to evaluate its own output before finalizing:

  • "Are you sure it makes sense?" → Basic sanity check
  • "Is it optimal?" → Pushes beyond 'good enough'
  • "Could we change anything to make the system work better for users?" → User-centric optimization
  • "Check over each bead super carefully" → Item-by-item review

bash
# The Plan Review Pattern
"Check over each bead super carefully—
are you sure it makes sense?
Is it optimal?
Could we change anything to make the system work better?
If so, revise the beads.
It's a lot easier and faster to operate in 'plan space'
before we start implementing these things!"
Pro Tip
Plan Space Principle: Revising plans is 10x cheaper than debugging implementations. Force verification at the planning stage.
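The plan-review pattern lends itself to a reusable suffix. A sketch, where the questions are quoted from this lesson and everything else is invented:

```python
# Hypothetical sketch: the plan-review pattern as a reusable suffix.
# The questions are quoted from this lesson; the names are invented.
VERIFICATION_QUESTIONS = [
    "Are you sure it makes sense?",
    "Is it optimal?",
    "Could we change anything to make the system work better for users?",
]

def with_plan_review(plan_prompt: str) -> str:
    """Append the standard self-verification questions to a planning prompt."""
    questions = "\n".join(f"- {q}" for q in VERIFICATION_QUESTIONS)
    return (
        f"{plan_prompt}\n\n"
        "Check over each item super carefully:\n"
        f"{questions}\n"
        "If anything falls short, revise the plan before implementing."
    )

print(with_plan_review("Draft the migration plan as a set of beads."))
```

Keeping the questions in one list means every planning prompt gets the same verification pass, which is exactly where the "plan space" savings come from.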

Pattern 4: The Fresh Eyes Technique

Psychological reset techniques help agents approach code without prior assumptions or confirmation bias:

  • Explicit Reset: with "fresh eyes" → Signals to discard prior assumptions
  • Random Exploration: "randomly explore the code files" → Avoids tunnel vision on expected locations
  • Peer Framing: "reviewing code written by your fellow agents" → Creates psychological distance from own work

bash
# The Fresh Eyes Code Review
"I want you to carefully read over all of the new code
you just wrote and other existing code you just modified
with 'fresh eyes' looking super carefully for any obvious
bugs, errors, problems, issues, confusion, etc.
Carefully fix anything you uncover."
# Peer Review Framing
"Turn your attention to reviewing the code written by
your fellow agents and checking for any issues, bugs,
errors, problems, inefficiencies, security problems,
reliability issues, etc. and carefully diagnose their
underlying root causes using first-principle analysis."

Pattern 5: Temporal Awareness

Great prompts consider future contexts—the agent that will continue this work, the human who will review it, the "future self" who needs to understand it:

bash
# Self-Documenting Output
"Create a comprehensive set of beads with detailed comments
so that the whole thing is totally self-contained and
self-documenting (including relevant background,
reasoning/justification, considerations, etc.—
anything we'd want our 'future self' to know about
the goals and intentions and thought process and how it
serves the over-arching goals of the project)."
  • Future Self: Write as if explaining to someone with no context
  • Self-Contained: Output should work independently of the current conversation
  • Over-Arching Goals: Connect current work to the bigger picture

Pattern 6: Context Anchoring

Stable reference documents (like AGENTS.md) serve as behavioral anchors. Re-reading them is especially critical after context compaction.

bash
# The Post-Compaction Refresh
"Reread AGENTS.md so it's still fresh in your mind.
Use ultrathink."
Warning
Why this matters after compaction:

1. Context decay: Rules lose salience as more content is added
2. Summarization loss: Compaction may miss nuances
3. Drift prevention: Periodic grounding prevents behavioral divergence
4. Fresh frame: Re-reading establishes correct operating context
bash
# Grounding Throughout Work
"Be sure to comply with ALL rules in AGENTS.md and
ensure that any code you write or revise conforms to
the best practice guides referenced in the AGENTS.md file."
# Making Rules Explicit
"You may NOT delete any file or directory unless I
explicitly give the exact command in this session."

Pattern 7: First Principles Analysis

Push for deep understanding over surface-level pattern matching:

bash
# Root Cause Emphasis
"Carefully diagnose their underlying root causes
using first-principle analysis and then fix or
revise them if necessary."
# Context Before Action
"Once you understand the purpose of the code in
the larger context of the workflows, I want you
to do a super careful, methodical check..."
  • Understand Before Fixing: Trace execution flows and dependencies first
  • Root Cause Over Symptom: Diagnose underlying issues, not surface manifestations
  • Larger Context: Understand how code fits into overall workflows

Putting It All Together

Here's a real prompt that combines multiple patterns:

markdown
"Reread AGENTS.md so it's still fresh in your mind.
Use ultrathink.
I want you to sort of randomly explore the code files
in this project, choosing code files to deeply investigate
and understand and trace their functionality and execution
flows through the related code files which they import or
which they are imported by.
Once you understand the purpose of the code in the larger
context of the workflows, I want you to do a super careful,
methodical, and critical check with 'fresh eyes' to find
any obvious bugs, problems, errors, issues, silly mistakes,
etc. and then systematically and meticulously and
intelligently correct them.
Be sure to comply with ALL rules in AGENTS.md and ensure
that any code you write or revise conforms to the best
practice guides referenced in the AGENTS.md file."

Pattern Analysis

  • Anchoring ← "Reread AGENTS.md..."
  • Intensity ← "Use ultrathink"
  • Fresh Eyes ← "randomly explore"
  • Scope (depth) ← "deeply investigate and understand"
  • First Principles ← "trace their functionality"
  • Context First ← "Once you understand...larger context"
  • Intensity (stacked) ← "super careful, methodical, and critical"
  • Fresh Eyes ← "with 'fresh eyes'"
  • Scope (breadth) ← "any obvious bugs, problems, errors, issues..."
  • Intensity (triple) ← "systematically and meticulously and intelligently"
  • Anchoring ← "comply with ALL rules"

Quick Reference

  • Intensity
    When: Tasks requiring maximum precision
    Key phrases: super carefully, methodical, use ultrathink
  • Scope Expansion
    When: Avoiding narrow focus or shortcuts
    Key phrases: take ALL, cast wider net, comprehensive
  • Self-Verification
    When: Before implementing or finalizing
    Key phrases: are you sure?, is it optimal?, revise if needed
  • Fresh Eyes
    When: Code review, finding missed issues
    Key phrases: fresh eyes, fellow agents, randomly explore
  • Temporal
    When: Creating persistent artifacts
    Key phrases: future self, self-documenting, self-contained
  • Anchoring
    When: After compaction or drift risk
    Key phrases: reread AGENTS.md, comply with ALL rules
  • First Principles
    When: Debugging or understanding complex code
    Key phrases: root causes, first-principle, larger context
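The quick reference above can also be kept as a lookup table, so key phrases can be pulled into prompts programmatically. A sketch with invented names; the phrase lists mirror the table:

```python
# Hypothetical sketch: the quick-reference table as a lookup,
# so key phrases can be composed into prompts programmatically.
PATTERNS = {
    "Intensity": ["super carefully", "methodical", "use ultrathink"],
    "Scope Expansion": ["take ALL", "cast wider net", "comprehensive"],
    "Self-Verification": ["are you sure?", "is it optimal?", "revise if needed"],
    "Fresh Eyes": ["fresh eyes", "fellow agents", "randomly explore"],
    "Temporal": ["future self", "self-documenting", "self-contained"],
    "Anchoring": ["reread AGENTS.md", "comply with ALL rules"],
    "First Principles": ["root causes", "first-principle", "larger context"],
}

def key_phrases(pattern: str) -> list[str]:
    """Look up the key phrases for a named pattern."""
    return PATTERNS[pattern]
```

A table like this is a reasonable seed for a personal prompt palette: the combined prompt shown earlier is essentially several of these entries composed in sequence.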
