How I work

The Predictable Delivery Operating System

A delivery operating layer with five components that work together. Each one addresses a specific gap between what leadership thinks is happening on projects and what's actually happening. Together they make delivery risks, scope drift, and PM inconsistency visible before they reach clients.

What this is

Delivery infrastructure, not a methodology rollout.

What it isn't

  • Not a PMO transformation.
    No new org structure, no reporting lines changed. I work inside the setup you already have.
  • Not generic Agile coaching.
    No framework certification, no Scrum ceremonies installed because they're "best practice." The process fits what your projects actually need.
  • Not a dashboard exercise.
    Tools don't fix the problem. The problem is usually that nobody has agreed on what the data should mean — or who's responsible for it when something goes wrong.

What it is

  • A weekly operating rhythm embedded in your existing delivery process
  • Five components that address specific, named gaps in delivery visibility
  • Built to work with the PMs and tools already in place — not to replace them
  • Calibrated to your portfolio in the first two weeks before anything becomes standard

The five components

Each component closes a specific gap in delivery visibility.

01
Portfolio visibility

All projects, one view, every week

Status, timeline confidence, budget pressure, and client sentiment — across every active project, in a consistent format. Leadership can compare health across engagements without translating between PM reporting styles.

02
Risk and escalation

Named risks with owners and escalation triggers

A weekly risk log that doesn't depend on individual PM judgment. Each risk has a named owner, a severity level, and a clear escalation trigger — so the question of "when does this go to leadership" has a standing answer, set once rather than re-argued every time.

03
Scope and margin

Unapproved changes tracked as they happen

A lightweight check that runs alongside delivery — catching scope additions that would otherwise be agreed informally and forgotten. Budget drift becomes visible in the same week it starts, not at invoice time.

04
PM discipline

Consistent reporting language across your PM team

PMs know how to report status, how to classify risk, and when to escalate — in the same terms, using the same structure. Regular check-ins keep the format calibrated across the team without turning into performance reviews.

05
Leadership control

Monthly summary of what needs a decision

A monthly leadership summary that separates what's running fine from what needs founder or CEO attention. Short enough to read in 10 minutes. Specific enough to act on. Not a status recap of things leadership already knows.

Example weekly rhythm

What a typical week looks like once the rhythm is running.

MON

PM status updates collected

Each PM submits their weekly update in the shared format — project health, budget position, current risks, client sentiment. Takes 20–30 minutes per PM when the format is clear.

TUE

Delivery review across all projects

I review all active projects — checking for gaps between what's reported and what I know from PM conversations, comparing risk exposure across engagements, flagging anything that needs attention before the week progresses.

WED

Risk log updated, owners confirmed

The central risk log gets updated with any new risks from the review. Existing risks are checked — owner still relevant, action still current, escalation trigger still appropriate. Anything stale gets flagged.

THU

PM check-in sessions

Working sessions with PMs on anything that came up in the review — unclear status signals, risks that need reframing, scope changes that need tracking. Async for routine weeks, live for anything urgent.

FRI

Weekly leadership summary prepared

A short summary of portfolio health goes to the founder or Head of Delivery. What changed this week, what's still at risk, what needs a decision before next week. Written for a reader who has 10 minutes, not 60.

What the artifacts look like

Four working documents, not a system of record.

These aren't templates. Each gets calibrated to your portfolio in the first two weeks. The format serves the review — not the other way around.

Artifact 01

Project health view

One page per project. Same four sections every week: status, timeline confidence, budget position, client sentiment. Red/amber/green is a starting point — the written assessment is what matters.

  • Overall status with brief written rationale
  • Timeline: on track, at risk, or slipped (with detail)
  • Budget: approved vs. actual vs. forecast
  • Client: last contact, sentiment, next touchpoint

Artifact 02

Risk review format

A live document updated weekly. Each row is one risk — what it is, how severe, who owns it, what the last action was, and what the escalation trigger looks like. When a risk drops off the log, the reason is documented.

  • Risk description (specific, not "resource issues")
  • Severity: high / medium / watch
  • Owner (one named person)
  • Last action + escalation trigger if unresolved
Artifact 03

Scope and margin review checklist

A short checklist that runs alongside delivery each week. Forces the question: has anything changed in scope since last week, and if so, has it been approved? Small but consistent.

  • New scope items: approved / pending / rejected
  • Budget vs. actuals vs. forecast (this week)
  • Unapproved work flagged before it continues
  • Margin delta from prior week, if any
Artifact 04

Monthly leadership summary

Sent at the end of each month. Two pages at most. Designed for a founder or CEO who doesn't have time to read a full portfolio review but needs to know what's at risk and what decisions belong at their level.

  • Portfolio snapshot: projects by status
  • What changed this month (risks resolved / new)
  • Decisions needed from leadership before next month
  • What I'm watching but not escalating yet
How the system gets set up

The first two weeks calibrate everything to your portfolio.

None of the artifacts above are installed on day one. The first two weeks are a diagnostic — reviewing your current projects, PM setup, and delivery gaps. What gets built afterward reflects what your situation actually needs, not a generic template applied from the outside.

By end of week 2 you have findings, quick wins, and a clear read on whether to continue. If the diagnostic doesn't surface enough to justify an ongoing engagement, I'll say so.

Next step

Start with the diagnostic. See how this fits your setup before committing.

Two weeks. A review of your projects, PM team, and delivery gaps. At the end you have a risk map, PM observations, and specific quick wins — plus a recommendation on whether an ongoing engagement makes sense. If the findings aren't useful, you stop. No invoice for the rest of the month.