PROMPT_STACK: 0x7A_02 NODE: FRAMEWORK_BRIEFING

// FRAMEWORK_BRIEFING

The Prompting Stack

Prompting has diverged into four cumulative disciplines. If you're still writing better sentences, you're optimizing the wrong layer.

Based on Nate B Jones's framework // Kerry Morrison's interpretation // February 2026

THE_GAP

The Tuesday Morning

Same company. Same role. Same model access. Radically different outcomes.

Person A — 2025 Workflow

Writes a prompt. Gets a draft. Rewrites half. Pastes into another chat. Asks for fixes. Copies into docs.

40 minutes. 1 deck. Needs manual cleanup.

Person B — 2026 Workflow

Writes a spec. Agent reads codebase, brand guidelines, audience context. Produces five variants. Human picks, refines.

11 minutes. 5 decks. Ready to ship.

REFRAME

0.1%

Your prompt as percentage of the context window

A 200-token prompt inside a 200K context window. You've been obsessing over one tenth of a percent of the input space.

THE_MODEL

The Stack

Four cumulative disciplines. Each layer assumes you've mastered the one below it.

L1

Prompt Craft

Clear instructions, role framing, formatting — the table stakes

L2

Context Engineering

Controlling the other 99.9% of what the model sees

L3

Intent Engineering

Defining what to want — goals, trade-offs, boundaries

L4

Spec Engineering

Agent-readable documents that encode organizational knowledge

LAYER_01

Prompt Craft

L1

The Foundation

Clear instructions. Explicit formatting. Role assignment. Chain-of-thought. Few-shot examples.

This is where 95% of "prompt engineering" content lives. It matters — but it's layer one of four.

"Prompt craft is like knowing how to type. Essential. But nobody's career is built on typing speed."

LAYER_02

Context Engineering

The shift: from writing better prompts to controlling what the model sees.

The Prompt

200

Tokens

Your instructions. The thing you type.

The Context

199,800

Tokens

System prompts, retrieved docs, conversation history, tool outputs, code, files.

CONTEXT_IS_EVERYWHERE

"Reflexive, unthinking escalation in business is bad context engineering. Politics is bad context engineering. Most things that frustrate us are, at their core, bad context engineering."

— Tobi Lütke, CEO of Shopify

Context engineering isn't just an AI skill. It's a thinking discipline. The ability to curate what information reaches a decision-maker — human or machine.
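What "curating what reaches the decision-maker" means in practice can be sketched in code: assemble the window deliberately, in priority order, and omit what doesn't fit. This is a minimal, hypothetical illustration — the tokenizer stand-in, function names, and budget are all invented for the sketch, not a real API:

```python
# Sketch: context engineering as deliberate assembly of the window.
# All names and the token budget are illustrative.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def assemble_context(prompt: str, sources: list[tuple[str, str]], budget: int = 200_000) -> str:
    """Fill the window in priority order, dropping sources that don't fit."""
    parts = [prompt]
    used = count_tokens(prompt)
    for label, text in sources:  # sources are pre-sorted by priority
        cost = count_tokens(text)
        if used + cost > budget:
            continue  # curate: omit whole documents rather than truncate mid-thought
        parts.append(f"## {label}\n{text}")
        used += cost
    return "\n\n".join(parts)

context = assemble_context(
    "Summarize the Q3 launch plan for the sales team.",
    [
        ("Brand guidelines", "Voice: direct, concrete, no superlatives."),
        ("Launch doc", "Q3 launch targets mid-market accounts."),
    ],
)
```

The design choice worth noticing: the prompt is one line of this function; everything else is deciding what the model sees.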

LAYER_03

Intent Engineering

L3

"What to Want"

Most prompts fail not because of bad phrasing, but because the human hasn't clarified their own intent. Intent engineering means defining:

  • Goal hierarchies — what matters most when goals conflict
  • Trade-off boundaries — what you're willing to sacrifice
  • Success criteria — how you'll know it worked
  • Failure modes — what "wrong" looks like

The hard part: intent engineering requires the human to do the thinking the model can't do for you. You can't delegate "what do I actually want?" to AI.
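The four components above can be written down as a small data structure before any prompting happens. A hedged sketch — the schema and field names are invented for illustration, not a standard:

```python
# Sketch: encoding intent explicitly, before the first prompt is written.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Intent:
    goals: list[str]                  # ordered: earlier goals win when goals conflict
    tradeoffs: dict[str, str]         # what you're willing to sacrifice, and why
    success_criteria: list[str]       # how you'll know it worked
    failure_modes: list[str] = field(default_factory=list)  # what "wrong" looks like

support_bot = Intent(
    goals=["customer satisfaction", "resolution speed"],  # satisfaction outranks speed
    tradeoffs={"speed": "acceptable to lose if satisfaction would drop"},
    success_criteria=["CSAT holds steady", "repeat contacts decline"],
    failure_modes=["fast but wrong answers", "customers escalate to cancel"],
)
```

An ordered goal list forces the conflict question ("what wins when speed and satisfaction diverge?") to be answered by a human, up front.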

CASE_STUDY

The Klarna Case

2.3M

Customer conversations handled by AI in first month

What they optimized

Resolution speed — handle tickets faster, reduce wait times, cut headcount

The metric looked great. Execs celebrated.

What they missed

Customer satisfaction plummeted. Revenue impact followed. They had to reverse course and re-hire.

The intent was underspecified.

LAYER_04

Spec Engineering

The Apex Layer

Spec engineering is the creation of agent-readable documents that encode organizational knowledge, constraints, and decision frameworks into reusable artifacts.

  • CLAUDE.md files — persistent project context that agents read on every invocation
  • System prompts — personality, guardrails, and behavior definitions
  • Tool descriptions — how agents understand their own capabilities
  • Evaluation rubrics — machine-readable success criteria

This is where prompting becomes an organizational discipline — not a personal skill. Specs are version-controlled, reviewed, and iterated like code.
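As an illustration of the first artifact type, here is a hypothetical CLAUDE.md fragment. Every project detail in it is invented; the point is the shape — constraints, context, and escalation rules in one agent-readable file:

```markdown
# CLAUDE.md — acme-checkout (illustrative)

## Constraints
- MUST: run the test suite before proposing any change.
- MUST NOT: modify files under `migrations/` without human review.
- PREFER: small, reviewable diffs over sweeping refactors.

## Context
- Payments logic lives in `src/billing/`; everything else is UI.
- CI runs against the payment provider's test mode; never touch live keys.

## Escalation
- Stop and ask a human if a change affects pricing or tax logic.
```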

CASE_STUDY

The Anthropic Case

When Claude's internal coding agent underperformed, the team had a choice.

First Instinct

"We need a better model."

Upgrade to the latest, most capable model. Throw compute at the problem.

Result: marginal improvement.

Actual Fix

"We need a better spec."

Rewrote the CLAUDE.md file. Better constraints, clearer intent, sharper evaluation criteria.

Result: dramatic improvement — same model.

BUILDING_BLOCKS

Five Primitives

Across all four layers, five foundational patterns appear again and again.

01

Constraints

Musts, must-nots, preferences, escalation triggers

02

Examples

Few-shot demonstrations that encode implicit standards

03

Evaluation Design

How to measure success before you start building

04

Decomposition

Breaking complex tasks into agent-sized units of work

05

Feedback Loops

Mechanisms for agents to self-correct and escalate

PRIMITIVE_01

Constraints

The most powerful primitive. Constraints eliminate ambiguity faster than instructions add clarity.

MUSTS

Hard Requirements

Non-negotiable rules the agent must follow. "Always cite sources." "Never modify production data."

PREFS

Preferences

Soft guidelines that shape output quality. "Prefer concise over thorough." "Favor established libraries."

ESCL

Escalation Triggers

When to stop and ask a human. "If cost exceeds $100." "If unsure about legal implications."
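The three constraint types above can be sketched as a pre-flight check that runs before an agent acts: musts reject, escalation triggers pause, preferences shape but never block. Rule contents, thresholds, and field names here are all illustrative:

```python
# Sketch: musts, preferences, and escalation triggers as a pre-flight check.
# The rules and the action schema are hypothetical.

def check_action(action: dict, est_cost: float) -> str:
    # Musts: hard requirements — violations are rejected outright.
    if action.get("target") == "production" and action.get("kind") == "write":
        return "reject: never modify production data"
    # Escalation triggers: stop and ask a human.
    if est_cost > 100:
        return "escalate: cost exceeds $100"
    # Preferences: shape output quality, but never block execution.
    note = " (prefer established libraries)" if action.get("new_dependency") else ""
    return "proceed" + note

check_action({"kind": "read", "target": "staging"}, est_cost=5)      # → "proceed"
check_action({"kind": "write", "target": "production"}, est_cost=5)  # → "reject: never modify production data"
check_action({"kind": "read", "target": "staging"}, est_cost=250)    # → "escalate: cost exceeds $100"
```

Note the ordering: musts are checked before escalation triggers, so a hard violation is never merely escalated.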

PRIMITIVE_03

Evaluation Design

Define what "good" looks like before asking the model to produce it.

Scenario      | Input                     | Expected Behavior             | Pass Criteria
Happy path    | Standard customer inquiry | Resolves in one turn          | Correct answer + tone match
Edge case     | Ambiguous request         | Asks clarifying question      | Doesn't guess or hallucinate
Failure mode  | Out-of-scope request      | Graceful escalation to human  | No attempt to answer
Adversarial   | Prompt injection attempt  | Refuses and logs              | Maintains constraints

If you can't fill in this table, your spec isn't ready.
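The same table can live as a machine-checkable structure, which is what makes it reusable across runs. A sketch with a stub agent standing in for a real one — the checker and its exact-match comparison are simplifications:

```python
# Sketch: the eval table as data, plus a runner that reports failed scenarios.
# A real checker would grade behavior, not compare strings exactly.

EVALS = [
    {"scenario": "happy path",   "input": "Standard customer inquiry",
     "expect": "resolves in one turn"},
    {"scenario": "edge case",    "input": "Ambiguous request",
     "expect": "asks clarifying question"},
    {"scenario": "failure mode", "input": "Out-of-scope request",
     "expect": "escalates to human"},
    {"scenario": "adversarial",  "input": "Prompt injection attempt",
     "expect": "refuses and logs"},
]

def run_evals(agent, evals):
    """Return the scenarios the agent failed; an empty list means the spec held."""
    return [e["scenario"] for e in evals if agent(e["input"]) != e["expect"]]

# A stub agent that only handles the happy path:
stub = lambda text: "resolves in one turn" if "Standard" in text else "guesses"
failures = run_evals(stub, EVALS)  # → ["edge case", "failure mode", "adversarial"]
```

Filling in EVALS before the agent runs is the "define good first" discipline in executable form.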

SYNTHESIS

The Full Picture

The Stack

L1

Prompt Craft

L2

Context Engineering

L3

Intent Engineering

L4

Spec Engineering

The Primitives

01

Constraints

02

Examples

03

Evaluation Design

04

Decomposition

05

Feedback Loops

Layers are cumulative — you need all four. Primitives are universal — they appear in every layer.

IMPLICATIONS

The 10x Gap

The gap between Person A and Person B isn't incremental. It's structural.

2025 Workflow

  • Open chat, write prompt
  • Iterate by conversation
  • Copy-paste between tools
  • Quality depends on session
  • Knowledge resets every chat

2026 Workflow

  • Write spec, invoke agent
  • Agent reads full context
  • Tools are integrated
  • Quality is encoded in spec
  • Knowledge persists in files

ORG_IMPACT

Organizational Implications

This isn't a personal productivity hack. The prompting stack changes how teams operate, what artifacts they produce, and which roles become critical.

  • PRDs become specs — product requirements rewritten as agent-readable documents with constraints and eval criteria
  • Briefs become constraint sets — creative briefs restructured around musts, must-nots, and preferences
  • QA becomes evaluation design — quality assurance shifts from manual testing to building eval tables before agents execute
  • Tribal knowledge becomes context — the things "everyone just knows" must be encoded into files agents can read
  • Prompt libraries become spec repos — version-controlled, reviewed, and iterated like code

ACTION

Start Monday

Five concrete steps. No tools to buy. No courses to complete.

01

Audit one workflow

Pick your most repeated task. Map which layer it's stuck at. Most are stuck at L1.

02

Write a CLAUDE.md file

For your most important project. Encode the context that lives in your head into a file the agent can read.

03

Define constraints before prompting

Before your next prompt, write three musts and three must-nots. See how output quality changes.

04

Build one eval table

Happy path, edge case, failure mode, adversarial. Four rows. Fill it in before the agent runs.

05

Share a spec with a colleague

Make it a team artifact, not a personal trick. Version-control it. Iterate together.

ATTRIBUTION

Framework Credit

The Prompting Stack

Original framework by Nate B Jones, who articulated the four-discipline model and the divergence of prompting into cumulative layers beyond prompt craft.

This presentation is Kerry Morrison's interpretation and extension of that framework — adapted for practitioners building AI-native workflows in enterprise contexts.

The five primitives synthesis, case study interpretations, and organizational implications are editorial additions to the original framework.

// END_TRANSMISSION

The prompt is
0.1%

The other 99.9% is context, intent, and spec. That's where the leverage lives.

Kerry Morrison // betterstory.co // February 2026
