DPRI: A Lightweight Workflow for AI-Assisted Development

A few weeks ago I noticed something odd about my backup rotation. Every day, one file would cascade from daily → weekly → monthly based purely on file counts, with no awareness of the calendar. The result? My weekly backups fell on random weekdays instead of Sundays, and my monthly folder had six files for April alone.

The fix itself wasn’t complicated — a few dozen lines of Python with ISO week and month awareness. But the process of fixing it reminded me why I need a structured workflow when working with AI agents.
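
For reference, here is a minimal sketch of what calendar-aware promotion can look like. It illustrates the idea rather than the actual patch: the directory layout, the filename format, and the promote() helper are all assumptions.

# Minimal sketch of ISO-aware rotation (illustrative; paths and names are assumptions).
from datetime import date
from pathlib import Path
import shutil

DAILY, WEEKLY, MONTHLY = Path("daily"), Path("weekly"), Path("monthly")

def backup_date(path: Path) -> date:
    # Assumes filenames like backup-2026-04-26.tar.gz
    return date.fromisoformat(path.name.split("backup-")[1][:10])

def promote(src: Path, dest_dir: Path) -> None:
    shutil.copy2(src, dest_dir / src.name)

def rotate() -> None:
    for directory in (DAILY, WEEKLY, MONTHLY):
        directory.mkdir(exist_ok=True)

    # Which ISO weeks and calendar months already have a promoted copy?
    weekly_done = {backup_date(p).isocalendar()[:2] for p in WEEKLY.glob("backup-*")}
    monthly_done = {(backup_date(p).year, backup_date(p).month) for p in MONTHLY.glob("backup-*")}

    for f in sorted(DAILY.glob("backup-*")):
        bdate = backup_date(f)
        iso_year, iso_week, iso_weekday = bdate.isocalendar()

        # Weekly: promote only the Sunday copy, once per ISO week.
        if iso_weekday == 7 and (iso_year, iso_week) not in weekly_done:
            promote(f, WEEKLY)
            weekly_done.add((iso_year, iso_week))

        # Monthly: promote the first copy seen for each calendar month, once.
        if (bdate.year, bdate.month) not in monthly_done:
            promote(f, MONTHLY)
            monthly_done.add((bdate.year, bdate.month))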

The Problem with Large Tasks

Here’s what usually happens: I describe a bug to O.C. (my OpenClaw agent). O.C. fixes it. I test it. We deploy it. Sounds simple, right?

Except when the task takes hours. When it touches multiple systems. When I need to step away and come back tomorrow. That’s when things get messy:

  • Context gets lost between sessions
  • Plans get made but never reviewed
  • Code gets written without tests
  • Deployments happen without verification

I’ve been running a herd of AI agents for months now. We’ve completed ARGs with 1,000+ emails, built email tools, monitoring systems, and games. But we’ve never had a structured pipeline that enforces the handoffs I actually want.

What is DPRI?

DPRI = Document → Plan → Review → Implement

It’s my shorthand for a lightweight SDLC inspired by traditional software development, but designed for small teams and AI-assisted workflows. The idea is simple: before you write code, document the problem. Before you implement, review the plan. After you build, verify before you deploy.

Each phase spawns an isolated agent session. Human approval is required only where it matters.

The Backup-Manager Fix: A DPRI Case Study

Let me walk through how this worked for the backup rotation bug:

1. Document

I described the problem to O.C. with evidence: log files showing the cascading behavior, directory listings confirming the duplicates. O.C. wrote a spec file documenting the problem, scope, and acceptance criteria.

Key question: What exactly is broken, and how do we know when it’s fixed?

2. Plan

O.C. broke the fix into four tasks: analyze current rotation, implement ISO-aware logic, add a --rotate-only flag, and write 22 unit tests. Each task got a size estimate and a model assignment.

Key question: What’s the smallest set of changes that fixes this without breaking anything else?

3. Review

Before writing code, a different agent reviewed the plan for edge cases. What happens on week 53? What about leap years? The review caught two issues: the original plan didn’t handle ISO week boundaries, and the --rotate-only flag needed a --dry-run companion.

Key question: Are we solving the right problem, or just a symptom?

4. Implement

A coding agent wrote the fix, the tests, and the documentation. It committed to a feature branch with descriptive messages. Tests passed on the first run (rare, but satisfying).

Key question: Does the code match the plan, and do the tests prove it works?

5. Verify (Peer Review)

A different agent reviewed the code for security and architecture issues. It approved with one suggestion: add a comment explaining why ISO week 1 can belong to the previous year.
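
The reason is easy to demonstrate with nothing but the standard library: isocalendar() assigns each date to the ISO year that owns its week, which does not always match the calendar year.

from datetime import date

# Early-January dates can belong to the previous ISO year's last week.
y, w, wd = date(2021, 1, 1).isocalendar()
print(y, w, wd)   # 2020 53 5 -> Jan 1, 2021 sits in week 53 of ISO year 2020

# Late-December dates can belong to the next ISO year's week 1.
y, w, wd = date(2024, 12, 30).isocalendar()
print(y, w, wd)   # 2025 1 1 -> Dec 30, 2024 opens week 1 of ISO year 2025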

Key question: Would I merge this if someone else wrote it?

6. Merge & Deploy

O.C. opened the PR, linked it to the issue, and summarized the changes. I reviewed and merged it. The next night’s backup run was clean.

Total time: About 3 hours across two days. Key insight: The structured phases prevented the “just fix it” rush that usually skips testing and review.

From Methodology to Skill

After the backup-manager fix, I realized something: OpenClaw has all the primitives for this pipeline, but no unified skill.

OpenClaw can:

  • Spawn isolated agent sessions (sessions_spawn)
  • Schedule recurring checks (cron)
  • Deliver notifications between phases (WhatsApp, email)
  • Persist state across sessions (files, git)

But there’s no skill that ties these together into a repeatable “start pipeline → run phases → human checkpoints → deploy” workflow.

So I built one.

How the DPRI Agent Workflow Skill Works

The skill is called dpri-agent-workflow. Here’s what happens when you say “start DPRI pipeline: fix the backup rotation bug”:

Phase 1: Document

  • Agent: Main session (O.C.)
  • Task: Parse request, research context, write spec
  • Output: docs/specs/2026-04-26-backup-rotation.md
  • Auto-advance: Yes

Phase 2: Plan

  • Agent: Isolated planning agent (reasoning-heavy model)
  • Task: Break into sub-tasks, estimate complexity, assign models
  • Output: docs/plans/2026-04-26-backup-rotation-plan.md
  • Checkpoint: Human reviews plan, replies “proceed” or “revise”

Phase 3: Review

  • Agent: Isolated review agent (different model for fresh eyes)
  • Task: Check for edge cases, security, architecture flaws
  • Output: docs/reviews/2026-04-26-backup-rotation-review.md
  • Auto-advance: If clean; loop back to Plan if issues found

Phase 4: Implement

  • Agent: Coding agent (code-focused model)
  • Task: Execute plan, write code, tests, docs
  • Output: Feature branch with commits
  • Retry logic: Max 2 retries on test failure

Phase 5: Verify

  • Agent: Review agent (different model from Implement)
  • Task: Code review, security audit
  • Output: Review feedback file
  • Auto-advance: If approved

Phase 6: Merge & Deploy

  • Agent: Main session (O.C.)
  • Task: Open PR, summarize changes
  • Checkpoint: Human approves and merges
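
Put together, the pipeline reads as a small, mostly linear state machine. The snippet below is my compact restatement of the phases above as Python data; the field names are illustrative, not the skill's actual schema.

# Compact restatement of the six phases (field names are illustrative only).
PHASES = [
    {"name": "document",  "agent": "main",     "gate": None,    "auto": True},
    {"name": "plan",      "agent": "planner",  "gate": "human", "auto": False},
    {"name": "review",    "agent": "reviewer", "gate": None,    "auto": True},   # may loop back to plan
    {"name": "implement", "agent": "coder",    "gate": None,    "auto": True},   # max 2 retries on test failure
    {"name": "verify",    "agent": "reviewer", "gate": None,    "auto": True},   # different model from implement
    {"name": "merge",     "agent": "main",     "gate": "human", "auto": False},
]

def next_phase(current: str) -> str | None:
    names = [p["name"] for p in PHASES]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None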

The State Machine

Pipeline state is persisted in .dpri-state.json:

{
  "pipeline_id": "550e8400-e29b-41d4-a716-446655440000",
  "task": "Fix backup rotation bug",
  "current_phase": "plan",
  "phases": {
    "document": {"status": "complete", "output": "docs/specs/..."},
    "plan": {
      "status": "in_progress",
      "checkpoint": "pending_approval"
    }
  }
}

This means the pipeline survives restarts. If O.C. crashes mid-implementation, the state file knows exactly where to resume.
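
A minimal sketch of what resuming from that file can look like, assuming the layout shown above (the function name and messages are mine, not the skill's):

import json
from pathlib import Path

STATE_FILE = Path(".dpri-state.json")

def resume_point() -> str:
    # Read the persisted pipeline state and decide where to pick up.
    state = json.loads(STATE_FILE.read_text())
    phase = state["current_phase"]
    info = state["phases"].get(phase, {})

    if info.get("checkpoint") == "pending_approval":
        return f"wait for human approval on '{phase}'"
    if info.get("status") == "in_progress":
        return f"re-spawn the '{phase}' agent and continue"
    return f"'{phase}' is complete; advance to the next phase"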

Checkpoint Gates

Two human checkpoints exist in the pipeline:

  1. After Plan — Review the plan before any code is written. You can say “proceed,” “review” to see details, or “revise [feedback]” to update the plan.

  2. After Verify — Approve the PR before merge. This is a hard gate for safety.

For trusted tasks, you can set auto_approve: ["plan"] to skip the first checkpoint. But the merge checkpoint is never auto-approved — that’s the safety boundary.
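
The gate policy boils down to a few lines. The sketch below only restates the rule described above; auto_approve is the one option name taken from the skill, everything else is illustrative.

CHECKPOINTS = {"plan", "merge"}   # human gates: after Plan, and before Merge
HARD_GATES = {"merge"}            # the safety boundary: never auto-approved

def needs_human(phase: str, auto_approve: frozenset[str] = frozenset()) -> bool:
    if phase not in CHECKPOINTS:
        return False                      # no gate at this phase
    if phase in HARD_GATES:
        return True                       # merge can never be skipped
    return phase not in auto_approve      # plan may be skipped for trusted tasks

assert needs_human("plan", frozenset({"plan"})) is False
assert needs_human("merge", frozenset({"plan", "merge"})) is True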

Model Rotation

One subtle feature: each phase uses a different model type:

Phase        Model Type                  Why
Plan         Reasoning-heavy             Needs to break down complex problems
Review       Different from Plan         Fresh perspective catches blind spots
Implement    Code-focused                Needs to write clean, tested code
Verify       Different from Implement    Fresh eyes on the implementation

The “different model” rule is intentional. An agent that planned the work shouldn’t review its own plan. An agent that wrote the code shouldn’t verify its own code.
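
Enforcing that rule mechanically is straightforward. The sketch below uses placeholder model names; the real assignments live in the skill's configuration.

# Placeholder model names; real assignments live in the skill's configuration.
MODEL_POOL = {
    "reasoning": ["reasoning-model-a", "reasoning-model-b"],
    "coding":    ["coding-model-a", "coding-model-b"],
}

def pick_model(kind: str, exclude: str | None = None) -> str:
    # Never hand a phase the same model that produced the work it must judge.
    for candidate in MODEL_POOL[kind]:
        if candidate != exclude:
            return candidate
    raise RuntimeError(f"no alternative {kind} model available")

plan_model      = pick_model("reasoning")
review_model    = pick_model("reasoning", exclude=plan_model)
implement_model = pick_model("coding")
verify_model    = pick_model("coding", exclude=implement_model)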

Why This Matters for Small Teams

Traditional SDLC assumes:

  • Dedicated QA engineers
  • Formal code review processes
  • Release managers
  • Change advisory boards

Small teams and AI-assisted workflows have none of these. But they still need:

  • Documentation before coding (prevents “what were we trying to do again?”)
  • Review before implementing (prevents building the wrong thing)
  • Verification before deploying (prevents “it worked on my machine”)

DPRI is my attempt to get 80% of SDLC value with 20% of the ceremony. No Jira tickets. No sprint planning. Just: document the problem, plan the fix, review the plan, implement with tests, verify before merge.

Try It

The skill is open-source and available at:

Repository: mostlycopypaste/dpri-agent-workflow-skill

Install it into your OpenClaw workspace, or just copy the scripts. The core is four Python tools:

  • init_pipeline.py — Start a new pipeline
  • advance_phase.py — Complete current phase, move to next
  • check_state.py — See where things stand
  • deliver_update.py — Send WhatsApp/email between phases

The Meta-Lesson

Building this skill was itself a DPRI pipeline:

  1. Document: I wrote a README explaining the problem and comparing Trycycle vs. our approach
  2. Plan: I broke the skill into SKILL.md, scripts, references, and templates
  3. Review: I re-read the skill-creator spec to make sure I was following best practices
  4. Implement: I wrote the code, tested it, fixed a bug in the terminal state handling
  5. Verify: I ran a full pipeline test end-to-end
  6. Merge: I packaged it and installed it

The pipeline that builds the pipeline. Very meta.

What’s Next

I’m using this for everything now. The next article? DPRI pipeline. The next bug fix? DPRI pipeline. The next feature? You get the idea.

The real test is whether it prevents the 2 AM “wait, did we test that?” moments. So far, so good.


Kevin Duane runs mostlycopyandpaste.com and works as an SRE at Disney. He manages a herd of AI agents and writes about what breaks.
