Pricing Technology: ROI Math Without the Fluff
A plain math model for pricing technology work—hours are the wrong unit, risk and replacement cost are the right ones.

Most technology pricing conversations go wrong from the first sentence.
"What's your hourly rate?" "How many hours will this take?" "Can we get more engineers to go faster?"
These are the wrong questions. They lead to the wrong conversations and the wrong decisions.
The right questions are about risk: What happens if this fails? What's the asset worth? What percentage of that value is at risk?
Hours are a byproduct. Risk is the foundation.
The Problem with Hourly Pricing
Hourly rates create perverse incentives on both sides.
For the buyer:
You're incentivized to minimize hours, which means minimizing scope, which means cutting corners. The cheapest option is to hire someone fast who does minimal work. That's rarely the best option.
For the seller:
You're incentivized to pad estimates, drag out work, and avoid efficiency. Every hour saved is money lost. The faster you work, the less you earn.
For the project:
Nobody is aligned around outcomes. The buyer wants fewer hours. The seller wants more hours. The project wants to succeed. These goals conflict.
The result: endless negotiation about scope, timeline creep, change order fights, and projects that technically "finish" but don't deliver value.
Start with the Asset, Not the Hours
Flip the conversation. Instead of "what will this cost?" ask "what is at risk?"
Asset value:
If the platform drives $X in annual revenue (or avoids $X in costs), that's the pool we're protecting or enhancing. A system processing $50M/year is worth protecting differently than a system processing $500K/year.
Replacement cost:
When an independent firm audited Conductor, they valued replacement cost at $20M–$35M. That's not revenue—that's what it would cost to rebuild from scratch, including the institutional knowledge, the battle-tested integrations, and the regulatory compliance we'd accumulated.
That number frames "expensive" differently. $200K to protect a $20M asset isn't expensive. It's insurance.
Risk lens:
The question isn't "what's the hourly rate?" It's "what percentage of the asset is at risk if we get this wrong?"
If architecture decisions affect 5% of a $50M revenue stream, you're protecting $2.5M. Spending $150K on getting those decisions right is rational, not premium.
Translate Risk into Dollars
Vague risks don't drive decisions. Quantified risks do.
Uptime risk:
A platform processing $100M/year does roughly $11K/hour in transactions. A 1% uptime hit during peak hours—say, 40 hours/year of degradation—threatens roughly $440K annually. Plus churn from unhappy customers. Plus manual rework costs. Plus reputation damage.
The math: hours of degradation × revenue per hour + churn risk + operational cost = actual risk exposure.
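The uptime math above can be sketched in a few lines (figures are the illustrative ones from the text, not real client data; even revenue flow across the year is an assumption, and peak-hour weighting would raise the per-hour figure):

```python
def uptime_risk_exposure(annual_revenue, degraded_hours, churn_cost=0, ops_cost=0):
    """Estimate annual dollars at risk from degraded uptime.

    Assumes revenue flows evenly across all 8,760 hours of the year.
    """
    revenue_per_hour = annual_revenue / (365 * 24)
    transaction_loss = revenue_per_hour * degraded_hours
    return transaction_loss + churn_cost + ops_cost

# The example from the text: $100M/year, 40 degraded hours/year
exposure = uptime_risk_exposure(100_000_000, 40)
print(f"${exposure:,.0f}")  # → $456,621 before churn and rework costs
```

Churn and operational rework enter as separate terms because they scale with customer impact, not directly with hours down.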
Security risk:
For regulated industries, breaches cost 4–7% of revenue in fines and litigation, plus customer churn, plus remediation. A $50M company facing a breach could see $2–5M in direct costs before counting lost customers.
The math: breach probability × (fines + litigation + churn + remediation) = security risk exposure.
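The expected-cost formula works the same way (the 5% breach probability and the cost split below are placeholders for illustration, not estimates from the text):

```python
def security_risk_exposure(breach_probability, fines, litigation, churn, remediation):
    """Expected annual breach cost: probability times total impact."""
    return breach_probability * (fines + litigation + churn + remediation)

# Illustrative only: an assumed 5% annual breach probability against
# the $2-5M direct-cost range mentioned for a $50M company.
exposure = security_risk_exposure(0.05, 1_500_000, 1_000_000, 750_000, 500_000)
print(f"${exposure:,.0f}")  # → $187,500 expected annual exposure
```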
Rewrite risk:
Failed rewrites are expensive. I've watched companies burn $2–6M and 12–24 months on rewrites that didn't ship. The original system was still running (barely), but the company lost two years of feature development and half their engineering team to frustration.
Spending $200–400K on architecture and guardrails to avoid a rewrite is rational, not premium. You're buying insurance against a multi-million-dollar risk.
Scope by Decisions, Not Tickets
The most valuable work isn't the most hours. It's the highest-leverage decisions.
High-leverage decisions:
These are the choices that compound:
- Data model: Get it wrong and every query is harder forever
- Integration contracts: Get it wrong and every vendor change is a crisis
- Deployment guardrails: Get it wrong and every release is Russian roulette
- Operability: Get it wrong and every incident is a fire drill
These decisions take relatively few hours. They determine the cost of everything else.
The pricing model I use:
- Fixed-fee architecture phase: Define the high-leverage decisions, document them, validate them. Deliverables: architecture docs, integration contracts, deployment guardrails, operability standards. Tied to outcomes: SLOs defined, contracts documented, runbooks shipped.
- Variable-rate build phase: Implementation work at lower rates, because the hard thinking is done. The architecture phase de-risked everything. Now we're executing.
Why this works:
Buyers see what they're getting: fewer rewrites, safer deploys, measurable reliability. They're not paying for hours of typing. They're paying for decisions that compound.
Cost of Delay
Value erodes while you wait. Quantify it.
The math:
If a feature saves 1 FTE-week per month across 12 reps, that's 12 FTE-weeks per month saved. If the feature is delayed 2 months, you've burned 24 FTE-weeks of potential savings—roughly $50–100K in productivity.
If a security fix prevents a breach risk, and you delay 6 months, you've carried that risk for 6 months. The expected cost isn't zero. It's probability × impact × time.
How to use it:
Prioritize by value erosion, not vibes. High cost-of-delay items get senior attention and fast timelines. Low cost-of-delay items can wait.
When a buyer says "we can't afford to start this quarter," compute the cost of delay. Sometimes waiting is fine. Sometimes waiting is more expensive than the project.
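Both delay calculations above reduce to one-liners (the $3K-per-FTE-week rate and the 10% annual risk probability are assumptions for illustration):

```python
def productivity_cost_of_delay(fte_weeks_saved_per_month, delay_months, cost_per_fte_week):
    """Savings burned while a productivity feature waits."""
    return fte_weeks_saved_per_month * delay_months * cost_per_fte_week

def risk_cost_of_delay(annual_probability, impact, delay_months):
    """Expected cost of carrying a risk during the delay window."""
    return annual_probability * impact * (delay_months / 12)

# 12 FTE-weeks/month saved, delayed 2 months, assumed $3K per FTE-week
print(f"${productivity_cost_of_delay(12, 2, 3_000):,.0f}")

# Assumed 10% annual probability of a $3M breach, fix delayed 6 months
print(f"${risk_cost_of_delay(0.10, 3_000_000, 6):,.0f}")
```

At these assumed rates, the 24 burned FTE-weeks land at $72K, inside the $50–100K range above, and carrying the breach risk for half a year costs roughly $150K in expectation.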
Evidence Beats Adjectives
"We're experienced" and "we deliver quality" mean nothing. Bring receipts.
What evidence looks like:
- Uptime history: "99.9% uptime over 20 years" with methodology
- Audit results: "$20M–$35M replacement cost from independent assessment"
- Renewal rates: "90%+ enterprise renewal rate over final 5 years"
- Incident data: "Mean time to detect dropped from hours to minutes"
Scenario comparison:
Don't just quote a price. Compare scenarios:
"Option A: $150K for architecture + ops hardening. Risk profile improves. Audit-ready. Expected incident rate drops 60%.
Option B: $80K for implementation only. Technical debt accumulates. Audit risk remains. Incident rate unchanged.
Option C: $0. Current trajectory continues. Rewrite likely within 2 years at $500K–2M."
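One way to put the three options side by side is as expected cost over the rewrite horizon (the rewrite probabilities here are assumptions for illustration; the spend figures and the $500K–$2M rewrite range come from the scenarios above):

```python
# Expected cost = upfront spend + probability-weighted rewrite cost.
# Rewrite probabilities are illustrative assumptions, not measurements.
scenarios = {
    "A: architecture + ops hardening": {"spend": 150_000, "rewrite_prob": 0.10},
    "B: implementation only":          {"spend":  80_000, "rewrite_prob": 0.40},
    "C: do nothing":                   {"spend":       0, "rewrite_prob": 0.70},
}
REWRITE_COST = 1_250_000  # midpoint of the $500K-$2M range

expected_cost = {}
for name, s in scenarios.items():
    expected_cost[name] = s["spend"] + s["rewrite_prob"] * REWRITE_COST
    print(f"{name}: ${expected_cost[name]:,.0f} expected")
```

Under these assumptions the "expensive" option A carries the lowest expected cost, which is exactly the shift the scenario comparison is meant to produce.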
The shift:
Conversations move from "why so expensive?" to "what risk are we mitigating?" The buyer isn't evaluating your hourly rate. They're evaluating their risk exposure.
Negotiation Framing
Every negotiation has three variables: scope, timeline, and budget. You can flex one, but not all three.
If budget is fixed:
Narrow scope to the highest-leverage decisions and guardrails. Don't pretend the same outcome happens for less money. Be explicit: "For this budget, we can deliver X but not Y. Y is valuable but requires more investment."
The wrong move: promising the same scope for less. You'll either cut corners or go over budget. Both damage trust.
If timeline is fixed:
Reduce scope to what's achievable, or increase staffing with the right mix. More engineers doesn't always mean faster—it depends on whether work can parallelize.
The wrong move: adding more junior engineers to a timeline-constrained project. Brooks's Law applies: "Adding manpower to a late software project makes it later." Add experienced people who can contribute immediately, or narrow scope.
If neither moves:
Pass on the project. Misaligned expectations cost more than they pay. You'll either damage your reputation by underdelivering, or lose money by overdelivering.
I've passed on projects worth $200K+ because the constraints were impossible. Every time, either the buyer came back with realistic constraints, or they found a vendor who overpromised and underdelivered. Both outcomes validated the decision.
The Quick ROI Model
Here's the template I use with founders and buyers:
Asset at risk:
$___ annual revenue or cost base that depends on this system
Risk if wrong:
___% probability of significant impact (uptime, security, compliance, churn)
Potential loss:
$___ = asset × risk percentage
Example: $50M revenue × 5% risk = $2.5M potential loss
Mitigation spend:
$___ for architecture, testing, ops hardening
The decision:
Is mitigation spend significantly less than potential loss? If yes, it's rational to spend. If no, adjust scope or accept the risk.
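The whole template fits in one function, using the $50M × 5% example from the text. The 0.5 threshold is my assumption for what "significantly less" means; tune it to your risk appetite:

```python
def roi_decision(asset_value, risk_pct, mitigation_spend, threshold=0.5):
    """Quick ROI model: spend if mitigation costs significantly less
    than the potential loss. `threshold` is an assumed cutoff for
    what counts as "significantly less"."""
    potential_loss = asset_value * risk_pct
    rational = mitigation_spend < threshold * potential_loss
    return potential_loss, rational

loss, rational = roi_decision(50_000_000, 0.05, 150_000)
print(f"Potential loss: ${loss:,.0f}; rational to spend: {rational}")
```

Here $150K of mitigation against $2.5M of potential loss clears the bar easily; a $2M mitigation bid against the same loss would not.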
Common Objections and Responses
"That's more than we budgeted."
"What's the cost if this fails? Let's compare the investment against the risk."
If they haven't quantified the risk, help them. Often, once risk is quantified, the budget looks small.
"Our last vendor did it for less."
"What was the outcome? Did they deliver on reliability, or are you fixing their mistakes now?"
Often, the cheap option created the problems you're now being asked to solve. That's not a comparison—it's context.
"Can you match Competitor X's rate?"
"I can tell you what you get at each price point. At X's rate, you get [scope]. At our rate, you get [scope + reliability + guardrails]. Which risk profile do you want?"
Don't compete on price. Compete on outcomes and evidence.
"We don't have budget until next quarter."
"What's the cost of delay? If we start next quarter, we're carrying these risks for an additional 3 months. The expected cost of that delay is approximately $___, which may exceed the budget gap."
Sometimes waiting is fine. Sometimes waiting is more expensive than finding budget.
Context → Decision → Outcome → Metric
- Context: Consulting on high-stakes technology projects where failure meant multi-million-dollar losses, working with buyers who initially focused on hourly rates and headcount.
- Decision: Shifted pricing conversations from hours to risk, used asset valuation and scenario comparison instead of estimate negotiation.
- Outcome: Better alignment on scope and expectations. Projects delivered on value, not hours. Fewer scope fights. Higher client retention.
- Metric: Projects scoped this way had 90%+ delivery success vs. industry average of ~30–40% for technology projects. Client retention: 80%+ came back for additional work.
Anecdote: When the Math Changed the Conversation
A client balked at a mid-six-figure engagement for architecture and ops hardening.
"That's a lot of money for consulting," they said. They were used to thinking in hourly rates.
I walked them through the math:
Their platform processed tens of millions annually. We modeled three scenarios:
- Customer churn from reliability issues: seven figures annually
- Peak-season outage: six figures in lost transactions plus support costs plus reputation damage
- Failed audit delaying a major contract renewal: eight-figure contract at risk
Combined annual risk exposure: significantly more than the engagement cost if current trajectory continued.
The engagement included:
- Architecture review and remediation
- Operability improvements (monitoring, runbooks, incident response)
- Audit preparation and documentation
Expected outcome: 60% reduction in incident risk, audit-ready, measurable churn improvement.
They funded the engagement.
Six months later, the hardened platform passed an audit on the first attempt. They renewed the major contract. One of the audit findings noted "mature operational practices" as a differentiator.
The ROI was obvious in hindsight. The math made it obvious upfront.
They've since referred multiple clients. Not because we were cheap—we weren't. Because we delivered on risk reduction.
(Numbers generalized to protect client confidentiality—the math scales the same way regardless of the specific figures.)
Mini Checklist: Technology Pricing
- [ ] Asset value quantified (annual revenue or cost base at risk)
- [ ] Risk percentage estimated with rationale
- [ ] Potential loss calculated (asset × risk)
- [ ] Scenario comparison prepared (invest vs. don't invest)
- [ ] Evidence assembled (uptime history, audit results, renewal rates)
- [ ] Scope tied to outcomes, not hours
- [ ] High-leverage decisions identified (data model, contracts, guardrails)
- [ ] Cost of delay calculated for timeline negotiations
- [ ] Three-variable tradeoff clear (scope, timeline, budget)
- [ ] Walk-away point defined (constraints that don't work)