Engineering Metrics

How to Measure Engineering Velocity (Without Lying to Yourself)

Engineering velocity is how fast your team ships working code to production. Not how fast they type, not how many tickets they close. Real velocity is measured by tracking time to first review, time to merge, and deployment frequency.

Why Most Velocity Metrics Are Wrong

You have probably tried:

Lines of Code (LOC)

Optimizes for volume. A developer who deletes 500 lines of legacy code is more valuable than one who adds 5,000 lines of boilerplate, but LOC metrics reward the latter.

Story Points / Velocity Charts

Jira velocity works until it does not. Teams game the system, story points drift over time, and you are measuring estimates, not delivery.

Commit Frequency

Frequent commits ≠ frequent shipping. You can commit 50 times a day and never merge a PR.

All of these share one problem: they measure activity, not outcomes.

Velocity is not how busy your team looks. It is how fast they deliver value.

The Core Metrics That Actually Work

1. Time to First Review

How long does a PR sit before anyone looks at it?

Good target

< 4 hours

Red flag

> 24 hours

If PRs wait 3 days for first review, your bottleneck is not coding—it is review capacity.
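
If you want to measure this yourself before reaching for a tool, here is a minimal sketch against the GitHub REST API: hours from PR creation to the first submitted review. The org name, repo name, and GITHUB_TOKEN environment variable are placeholders you would swap for your own.

```python
import os
from datetime import datetime

import requests

# Placeholders: swap in your own org/repo and export a token with repo read access.
OWNER, REPO = "your-org", "your-repo"
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def _ts(value: str) -> datetime:
    # GitHub timestamps look like "2024-05-01T12:34:56Z".
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def hours_to_first_review(pr_number: int):
    """Hours from PR creation to the first submitted review, or None if unreviewed."""
    pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
    reviews = requests.get(f"{API}/pulls/{pr_number}/reviews", headers=HEADERS).json()
    submitted = [_ts(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if not submitted:
        return None
    return (min(submitted) - _ts(pr["created_at"])).total_seconds() / 3600
```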

2. Time to Merge

From PR open to merge, how long does it take? This is your actual cycle time.

Elite

< 1 day

High

< 2 days

Red flag

> 5 days
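
The median is easy to compute the same way. This sketch again assumes placeholder org/repo names and a GITHUB_TOKEN; it looks at one page of recently updated closed PRs and ignores the ones that were closed without merging.

```python
import os
import statistics
from datetime import datetime

import requests

# Placeholders: swap in your own org/repo and export GITHUB_TOKEN.
OWNER, REPO = "your-org", "your-repo"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def _ts(value: str) -> datetime:
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def median_days_to_merge(sample_size: int = 100) -> float:
    """Median days from PR open to merge over one page of recently updated closed PRs."""
    prs = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
        headers=HEADERS,
        params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": sample_size},
    ).json()
    days = [
        (_ts(pr["merged_at"]) - _ts(pr["created_at"])).total_seconds() / 86400
        for pr in prs
        if pr.get("merged_at")  # skip PRs that were closed without merging
    ]
    return statistics.median(days)
```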

3. Deployment Frequency (DORA)

How often are you shipping to production? This is the ultimate velocity metric.

Elite teams

Multiple times per day

High performers

Once per day to once per week

You should worry

Less than once per month
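
If commits landing on a production branch are a reasonable proxy for deploys in your setup (they are not for every team), a rough per-day count looks like the sketch below; the org, repo, and branch names are placeholders. If your pipeline records real deployment events, count those instead.

```python
import os
from collections import Counter
from datetime import datetime, timedelta, timezone

import requests

# Placeholders: your org/repo and the branch that actually reaches production.
OWNER, REPO, PROD_BRANCH = "your-org", "your-repo", "main"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def deploys_per_day(days: int = 30) -> Counter:
    """Approximate deployment frequency by counting commits that landed on the
    production branch per calendar day (one page of results, up to 100 commits)."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    commits = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/commits",
        headers=HEADERS,
        params={"sha": PROD_BRANCH, "since": since, "per_page": 100},
    ).json()
    return Counter(c["commit"]["committer"]["date"][:10] for c in commits)  # keys: YYYY-MM-DD
```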

What Good Velocity Looks Like (Real Examples)

Team A: Fast on Paper, Slow in Reality

  • LOC per week: 15,000 (looks great!)
  • PRs merged: 8
  • Time to merge: 6.5 days median
  • Deployment frequency: Once every 2 weeks

Diagnosis: Team is writing a lot of code, but it is sitting in review forever. Likely batching features.

Team B: Actual High Velocity

  • LOC per week: 3,000 (looks slow!)
  • PRs merged: 45
  • Time to merge: 1.2 days median
  • Deployment frequency: 3-4x per day

Diagnosis: Team ships small, incremental changes rapidly. This is real velocity.

Which team would you rather be on?

How to Actually Improve Velocity

If PRs Sit Too Long Before First Review

Problem: Review capacity bottleneck

Solutions:

  • Rotate "review on-call" duty
  • Set an SLA for first review (4 hours; see the sketch after this list)
  • Make reviewing part of performance evaluations
  • Reduce WIP limits
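
For the SLA item, a small scheduled script can surface breaches. This sketch assumes placeholder org/repo names and a GITHUB_TOKEN, and leaves the notification side (Slack, email, whatever your team watches) to you.

```python
import os
from datetime import datetime, timezone

import requests

# Placeholders: your org/repo; run this on a schedule (cron, CI, etc.).
OWNER, REPO = "your-org", "your-repo"
SLA_HOURS = 4
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def prs_breaching_review_sla():
    """Open PRs older than SLA_HOURS with no submitted review yet."""
    now = datetime.now(timezone.utc)
    breaches = []
    for pr in requests.get(f"{API}/pulls", headers=HEADERS, params={"state": "open"}).json():
        reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews", headers=HEADERS).json()
        opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        age_hours = (now - opened).total_seconds() / 3600
        if not reviews and age_hours > SLA_HOURS:
            breaches.append((pr["number"], pr["title"], round(age_hours, 1)))
    return breaches
```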

If Time to Merge Is High

Problem: Long review cycles or unclear requirements

Solutions:

  • Write better PR descriptions
  • Ship smaller PRs (< 250 lines; a size check is sketched after this list)
  • Use draft PRs for early feedback
  • Align on design before implementing
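
One lightweight way to enforce the size limit is a CI step that fails oversized PRs. This sketch uses the additions and deletions reported by the GitHub PR API; the org/repo names, the 250-line threshold, and the GITHUB_TOKEN are placeholders to adjust.

```python
import os
import sys

import requests

# Placeholders: your org/repo; run as a CI step with the PR number as the argument.
OWNER, REPO = "your-org", "your-repo"
MAX_CHANGED_LINES = 250
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def check_pr_size(pr_number: int) -> None:
    """Fail the build when a PR touches more than MAX_CHANGED_LINES lines."""
    pr = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{pr_number}", headers=HEADERS
    ).json()
    changed = pr["additions"] + pr["deletions"]
    if changed > MAX_CHANGED_LINES:
        print(f"PR #{pr_number} changes {changed} lines; consider splitting it.")
        sys.exit(1)
    print(f"PR #{pr_number} changes {changed} lines; within the {MAX_CHANGED_LINES}-line target.")

if __name__ == "__main__":
    check_pr_size(int(sys.argv[1]))
```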

If Deployment Frequency Is Low

Problem: Batching or deployment friction

Solutions:

  • Automate deployment (remove manual gates)
  • Use feature flags to deploy without releasing (see the sketch after this list)
  • Ship daily, even if features are not done
  • Measure deployment attempts, not just successes
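
Feature flags are less exotic than they sound: at minimum, a conditional around the new code path. All names in this sketch are illustrative; the point is that both paths ship on every deploy and the new one stays dark until the flag flips.

```python
import os

# Env-var flag here; a flag service or config file works the same way.
FLAGS = {"new_checkout_flow": os.environ.get("FF_NEW_CHECKOUT_FLOW", "off") == "on"}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def legacy_checkout(cart):
    return f"legacy checkout: {len(cart)} items"  # stand-in for the existing path

def new_checkout(cart):
    return f"new checkout: {len(cart)} items"  # stand-in for the half-finished feature

def checkout(cart):
    # Deployed to production either way; "released" only when the flag is on.
    return new_checkout(cart) if is_enabled("new_checkout_flow") else legacy_checkout(cart)
```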

The PRPulse Approach

PRPulse tracks all of this automatically from your GitHub repos:

  • Time to first review, time to merge, review cycles
  • PR size distribution and trends over time
  • Deployment frequency via production branch tracking
  • Individual contributor patterns (who is reviewing? who is blocked?)
  • Team velocity trends (are we improving quarter over quarter?)

No Jira integration needed. No LOC counting. No manual tracking.

Just connect GitHub, select your repos, and see real velocity metrics within an hour.