I've seen OKRs done badly many times. Big aspirational objectives with key results that are either unmeasurable or trivially achievable. Quarterly reviews where the team reports green on everything and nothing changed. The process that's supposed to create alignment and accountability producing neither.
The version we're running at Wasteer is different: 97.6% of committed key results completed last quarter. Here's what we do differently.
The design principles we started with
Key results have to be binary or numerical. Not "improve documentation quality" but "documentation coverage for all production services above 80% as measured by our coverage tool." Not "better incident response" but "MTTR below 45 minutes for P1 incidents." If you can't tell at the end of the quarter whether it was achieved, it's not a key result. It's an intention.
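The "binary or numerical" rule can be enforced mechanically. A minimal sketch of what that might look like (the `KeyResult` structure and field names are hypothetical illustrations, not Wasteer's actual tooling):

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    """A key result is a measurable claim: a metric, a direction, a target."""
    description: str
    metric: str            # e.g. "doc_coverage_pct", "p1_mttr_minutes"
    target: float
    higher_is_better: bool

    def achieved(self, measured: float) -> bool:
        """Unambiguously true or false at quarter end -- no judgment calls."""
        if self.higher_is_better:
            return measured >= self.target
        return measured <= self.target

# "Coverage above 80%" and "MTTR below 45 minutes" both fit this shape;
# "improve documentation quality" cannot be expressed at all.
coverage = KeyResult("Doc coverage for prod services", "doc_coverage_pct", 80.0, True)
mttr = KeyResult("P1 mean time to recovery", "p1_mttr_minutes", 45.0, False)
print(coverage.achieved(83.5))  # True
print(mttr.achieved(52.0))      # False
```

The point of the sketch is the type signature: if a key result can't be written as a metric plus a threshold, it's an intention, not a key result.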
Fewer objectives. We run three objectives per team per quarter. Not five, not seven. Three. The discipline of choosing three forces real prioritization. If everything is an objective, nothing is. The things that don't make the list are documented as out-of-scope decisions, not forgotten.
Key results are commitments, not hopes. The team signs off on key results before the quarter starts. Not aspirational targets that nobody really expects to hit. Commitments that the team believes are achievable with the resources available. If the leadership team wants a stretch target, it goes in a separate column, clearly labeled as a stretch, and doesn't count against completion rate.
The accountability mechanics
Weekly check-ins are five minutes, not an hour. Each key result gets a confidence score (1-5) and a one-line status update. The format is fixed. There's no room for lengthy explanations or status theater. If a key result is at confidence 2, the conversation about why and what to do is a separate meeting with the people responsible.
Blockers are surfaced immediately. We have a norm that a key result dropping to confidence 2 or below triggers an immediate flag to the team lead, not waiting until the weekly check-in. The earlier a blocker is visible, the more options there are for resolving it.
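The check-in format and the flag rule above are simple enough to encode. A sketch under the post's stated rules (confidence 1-5, one-line status, flag at confidence 2 or below); the names and validation details are my own assumptions:

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 2  # confidence at or below this triggers an immediate flag

@dataclass
class CheckIn:
    key_result: str
    confidence: int   # 1-5
    status: str       # one line, fixed format -- no room for status theater

    def __post_init__(self):
        if not 1 <= self.confidence <= 5:
            raise ValueError("confidence must be between 1 and 5")
        if "\n" in self.status:
            raise ValueError("status must be a single line")

def needs_flag(checkin: CheckIn) -> bool:
    """Don't wait for the weekly check-in: flag the team lead now."""
    return checkin.confidence <= FLAG_THRESHOLD

ok = CheckIn("MTTR below 45 min", 4, "On track; runbook updates landing this week")
risk = CheckIn("Doc coverage above 80%", 2, "Blocked on coverage tool upgrade")
print(needs_flag(ok))    # False
print(needs_flag(risk))  # True
```

Keeping the structure this rigid is the point: there is nowhere to put a lengthy explanation, so the explanation happens in a separate conversation with the people responsible.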
End-of-quarter retrospective on the OKRs themselves. Not just on delivery. On whether the objectives were the right objectives. Key results that were consistently at confidence 5 from week one were probably not ambitious enough. Objectives that got completed but didn't move anything that mattered were probably wrong objectives. This retrospective feeds the next quarter's planning.
Why 97.6% is the right number
100% would mean the key results weren't ambitious enough. 80% would mean the commitments were aspirational rather than real. 97.6% means we're calibrating correctly: the key results are achievable with focused effort and occasionally one slips because something unexpected happened, which is normal.
The completion rate is a calibration metric, not a performance metric. The goal isn't to hit 97.6%. The goal is to build objectives that matter and complete them. The completion rate tells you whether your planning process is honest.
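As a calibration metric, the completion rate is just completed committed key results over committed key results, with stretch targets kept out of the denominator entirely (per the separate-column rule earlier in the post). A sketch under those assumptions:

```python
def completion_rate(results):
    """results: list of (completed: bool, is_stretch: bool) per key result.
    Stretch targets live in a separate column and never count either way."""
    committed = [done for done, stretch in results if not stretch]
    if not committed:
        return 0.0
    return 100 * sum(committed) / len(committed)

quarter = [
    (True, False), (True, False), (True, False),  # committed, completed
    (False, False),                               # committed, slipped
    (False, True),                                # stretch, missed -- excluded
]
print(f"{completion_rate(quarter):.1f}%")  # 75.0%
```

A quarter like the example above, sitting well below the mid-90s, would be a planning signal rather than a performance problem: the commitments were more aspirational than real.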
What I've seen fail
OKRs where the objectives are set by leadership and handed to the team. Ownership requires authorship. If the team didn't write the key results, the team doesn't own them.
OKRs used as performance reviews. Key results are team commitments, not individual performance assessments. Mixing them with performance management produces key results that are designed to look achievable, not to drive the right behavior.
OKRs without the retrospective. The planning process improves every quarter if you study what went wrong and why. Without the retrospective, you repeat the same calibration errors indefinitely.
With gusto, Fatih.