Measure what matters: lead time, cycle time, and pipeline health

⏱ Estimated reading time: 6 min

By Zain Ahmed

If your pipeline can’t be measured, it can’t be managed. Status meetings and vibes won’t save you. What moves teams is ruthless clarity about how long work takes from the minute it’s asked for to the minute it’s done, how much of that time is real work versus waiting, and whether the whole system is getting faster or just louder. Three ideas run the show: lead time, cycle time, and pipeline health. Nail those, and you stop arguing about opinions and start fixing flow.

Lead time is the clock that starts when a request enters your world and stops only when the outcome is delivered. A prospect fills a form, a tenant logs a maintenance issue, a project ticket is created—time begins. It doesn’t pause because someone went on leave, and it doesn’t end when you move a card to “review.” It ends when the customer would say, “I’ve got what I came for.” That definition matters because it forces you to measure reality, not internal milestones.

Cycle time is the subset of that clock where work is actively being processed. Think of it as hands-on time. If a claim sits for two days waiting for an approver and then takes forty minutes to validate and pay, your cycle time is forty minutes and your lead time is two days plus forty minutes. The ratio between the two exposes your real problem. When cycle time is a tiny slice of lead time, your issue isn’t skill; it’s waiting, batching, handoffs, and approvals that stall.
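
If you want that arithmetic in code, here is a minimal Python sketch; the flow_metrics helper and its field layout are illustrative, not pulled from any particular tool. It splits one item’s elapsed clock into lead time, cycle time, and the ratio between them:

```python
from datetime import datetime, timedelta

def flow_metrics(requested_at, touch_intervals, delivered_at):
    """Split one item's clock into lead time, cycle time, and flow
    efficiency (cycle / lead). touch_intervals is a list of
    (start, end) datetime pairs covering hands-on work."""
    lead_time = delivered_at - requested_at
    cycle_time = sum((end - start for start, end in touch_intervals), timedelta())
    efficiency = cycle_time / lead_time  # dividing timedeltas yields a float
    return lead_time, cycle_time, efficiency

# The claim from the text: two days of waiting, forty minutes of work.
requested = datetime(2024, 3, 1, 9, 0)
worked = [(datetime(2024, 3, 3, 9, 0), datetime(2024, 3, 3, 9, 40))]
delivered = datetime(2024, 3, 3, 9, 40)

lead, cycle, eff = flow_metrics(requested, worked, delivered)
print(lead, cycle, f"{eff:.1%}")  # 2 days, 0:40:00  0:40:00  1.4%
```

On that claim, flow efficiency comes out around 1.4 percent: the hands-on work is fine; the waiting is the whole story.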

Pipeline health is the combination of speed, reliability, and predictability across your stages. Healthy pipelines move steadily with few surprises. Sick pipelines spike and stall; work in progress (WIP) balloons, aging tickets rot, and “on track” updates conceal the fact that nothing is actually finishing. You can feel an unhealthy pipeline in your bones: more check-ins, more rework, more chasing, more “just circling back.” Healthy pipelines don’t require a meeting to be legible.

Start by deciding exactly where each clock starts and stops. Pick definitions that a stranger would accept. For sales, the clock might start at qualified lead and stop at booked revenue, not “proposal sent.” For maintenance, it might start at request received and stop at job completed with confirmation, not “technician assigned.” Put those definitions into your tools as real fields and make them unambiguous. If two teams measure differently, your reports will be fiction.
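
One way to keep those definitions unambiguous is to treat them as configuration rather than tribal knowledge. Here is a sketch with hypothetical pipeline and event names; the point is that start and stop lines live in one reviewable place:

```python
# Clock definitions as explicit, shared configuration. Pipeline and
# event names are illustrative; yours come from your own tools.
CLOCKS = {
    "sales": {
        "lead_time": {"start": "lead_qualified", "stop": "revenue_booked"},
    },
    "maintenance": {
        "lead_time": {"start": "request_received", "stop": "job_confirmed_complete"},
    },
}

def clock_bounds(pipeline: str, clock: str) -> tuple[str, str]:
    """Return (start_event, stop_event), failing loudly if anyone
    tries to measure against a boundary that was never defined."""
    spec = CLOCKS[pipeline][clock]
    return spec["start"], spec["stop"]

print(clock_bounds("sales", "lead_time"))  # ('lead_qualified', 'revenue_booked')
```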

Once the boundaries are real, collect timestamps at the source. Every movement between stages must create a durable record. That means no status changes in DMs, no “verbal approvals,” no offline spreadsheets that never sync. Use one system of record for the pipeline and write everything back to it. Measure on rolling windows so you see trend, not a one-off snapshot. Look at the last eight to twelve weeks and compare cohorts fairly: same type of work, same complexity, same stage path.
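
As a sketch of what a rolling, cohort-based view can look like, assume durable start and stop timestamps pulled from one system of record (the record layout here is hypothetical):

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Each record is one finished item with its clock boundaries.
records = [
    {"item": "T-101", "start": datetime(2024, 3, 4), "stop": datetime(2024, 3, 7)},
    {"item": "T-102", "start": datetime(2024, 3, 5), "stop": datetime(2024, 3, 6)},
    {"item": "T-103", "start": datetime(2024, 3, 11), "stop": datetime(2024, 3, 15)},
]

# Cohort by completion week, then take the median so outliers can't lie.
by_week = defaultdict(list)
for r in records:
    week = r["stop"].isocalendar()[:2]  # (ISO year, ISO week)
    by_week[week].append((r["stop"] - r["start"]).total_seconds() / 86400)

for week in sorted(by_week):
    print(week, f"median lead time: {median(by_week[week]):.1f} days")
```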

Interpreting the numbers is where most teams hallucinate. Start with medians, not averages; a few outliers will lie to you. Watch lead time and cycle time together, not in isolation. If both are dropping, you’re genuinely getting faster. If cycle time drops and lead time stays flat, you’re doing the work quicker when you touch it, but items are still rotting in queues. If lead time drops while cycle time climbs, you’re rushing hands-on work and likely raising error rates; expect rework to bite you in two weeks. Tie the numbers to outcomes. Shorter lead time should correlate with higher conversion, fewer cancellations, fewer complaints. If it doesn’t, you’ve gamed the metric and broken the service.
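
That decision logic is mechanical enough to encode. A sketch, with an illustrative noise threshold you would tune to your own pipeline:

```python
def read_the_trend(lead_delta_pct: float, cycle_delta_pct: float) -> str:
    """Interpret week-over-week changes in median lead and cycle time."""
    FLAT = 5.0  # within ±5% counts as no real movement (assumed threshold)
    lead_down = lead_delta_pct < -FLAT
    cycle_down = cycle_delta_pct < -FLAT
    cycle_up = cycle_delta_pct > FLAT
    if lead_down and cycle_down:
        return "genuinely faster: both clocks are dropping"
    if cycle_down and not lead_down:
        return "hands-on work is quicker, but items still rot in queues"
    if lead_down and cycle_up:
        return "rushing the work; expect rework in the next two weeks"
    return "no clear movement; keep the window rolling"

print(read_the_trend(lead_delta_pct=-12.0, cycle_delta_pct=-9.0))
# genuinely faster: both clocks are dropping
```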

Aging work-in-progress is the silent killer. Items that linger past their expected age clog the pipe and starve new work. Healthy pipelines age evenly; unhealthy ones have a long tail of dusty items nobody wants to touch. Watch the age of each item in stage, not just total WIP. If something sits beyond its time budget, it should escalate automatically to a human with a name and a deadline. No “team” owners. Accountability must be sharp.
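
Here is what that automatic escalation can look like in miniature. The stage budgets and the escalate() callback are assumptions; the non-negotiable part is a named human:

```python
from datetime import datetime, timedelta

# Illustrative per-stage time budgets; set yours from real flow data.
STAGE_BUDGETS = {"triage": timedelta(hours=24), "review": timedelta(hours=48)}

def escalate(item_id: str, owner: str, overdue_by: timedelta) -> None:
    # Stand-in for paging, reassignment, or a direct message.
    print(f"ESCALATE {item_id} -> {owner}, overdue by {overdue_by}")

def sweep_aging_wip(items, now=None):
    """Flag every item that has outlived its stage budget."""
    now = now or datetime.now()
    for it in items:
        age = now - it["entered_stage_at"]
        budget = STAGE_BUDGETS.get(it["stage"])
        if budget and age > budget:
            escalate(it["id"], it["owner"], age - budget)  # a name, never a "team"

sweep_aging_wip([
    {"id": "T-42", "stage": "review", "owner": "zain",
     "entered_stage_at": datetime.now() - timedelta(hours=60)},
])
```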

You don’t fix the numbers by yelling at them. You fix them by attacking the delays you can control. Remove handoffs by moving decisions closer to the work. Replace “waiting for info” loops with intake that refuses incomplete requests. Set WIP limits so nothing new starts until something finishes; the fastest way to double throughput is to stop starting. Kill approvals that exist only because “we’ve always done it,” and codify the approvals that matter with rules and SLAs. Automate the status updates and rollups so progress is visible without stealing people’s time. When you remove the need to remember to report, the system starts telling the truth.
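
Of those fixes, the WIP limit is the simplest to automate. A minimal sketch, with illustrative per-stage limits:

```python
# Nothing new starts until something finishes. Limits are illustrative;
# derive yours from where work actually piles up.
WIP_LIMITS = {"in_progress": 4}

def can_start(stage: str, current_wip: int) -> bool:
    """Refuse a new start when the stage is at its limit."""
    limit = WIP_LIMITS.get(stage)
    return limit is None or current_wip < limit

if can_start("in_progress", current_wip=4):
    print("pull the next item")
else:
    print("finish something first; the limit is the point")
```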

Here’s how it looks in practice. A property team measured maintenance requests and discovered a median lead time of four days on “simple” jobs that should have taken one. Cycle time was under forty minutes. The problem wasn’t skill; it was waiting. They redefined intake so requests couldn’t be submitted without photos and required fields, added automatic contractor assignment by property and issue type, and set a twenty-four-hour escalation if the job hadn’t moved. They also added a one-sentence status that wrote back to the job record whenever a stage changed, and a weekly digest that posted itself. Within two weeks, lead time dropped to one day and ten hours; cycle time stayed the same; complaints fell off a cliff. They didn’t hire. They didn’t buy a new platform. They respected the clocks.

Another team in insurance measured claims and found a different pattern. Lead time was unstable week to week and cycle time had a fat tail. Root cause analysis showed rework after final review because policy data was inconsistent. They implemented a small validation step right after intake, used an extraction layer to standardize policy numbers and dates, and blocked anything that didn’t validate from entering the queue. Lead time variance collapsed, cycle time tightened, and payout accuracy improved. The win wasn’t a new dashboard; it was a better first mile.
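
A first-mile validation step like that one can be very small. This sketch assumes a hypothetical policy-number format and ISO dates; the real rules would come from your own policy data:

```python
import re
from datetime import datetime

POLICY_RE = re.compile(r"^POL-\d{7}$")  # assumed format, e.g. POL-1234567

def validate_claim(raw: dict) -> dict:
    """Standardize and validate a claim before it may enter the queue."""
    policy = raw.get("policy_number", "").strip().upper().replace(" ", "")
    if not POLICY_RE.match(policy):
        raise ValueError(f"unparseable policy number: {raw.get('policy_number')!r}")
    loss_date = datetime.strptime(raw.get("loss_date", ""), "%Y-%m-%d").date()
    return {"policy_number": policy, "loss_date": loss_date}

print(validate_claim({"policy_number": " pol-1234567", "loss_date": "2024-03-01"}))
# {'policy_number': 'POL-1234567', 'loss_date': datetime.date(2024, 3, 1)}
```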

Expect resistance. People will argue definitions, complain about “extra fields,” and insist their work can’t be measured because it’s special. That’s fine. Keep the definitions boring and public. Keep the measurement visible and automatic. Reward teams for finishing and for killing waste, not for starting more. When the clocks are fair and the data is consistent, the arguments dry up because reality is right there on the page.

If you want a rule of thumb to run the whole show: lead time is what customers feel, cycle time is what your team controls, and pipeline health is how safe it is to promise. Keep lead time short and stable, keep cycle time tight without heroics, and keep the pipe slim by limiting WIP and purging aged items. Do those three and your “capacity problem” will likely evaporate without a single headcount request.

You can get this moving in two weeks. Pick one pipeline. Write down clear start and finish lines in the system. Turn on the timestamps. Publish a simple view that shows median lead time, median cycle time, items past due, and the five oldest items with owners. Add automatic nudges at twenty-four and forty-eight hours without movement. Review the numbers weekly and change one thing at a time: intake quality, approval rules, WIP limits, assignment logic. Measure again. Repeat. It’s boring, and that’s why it works.
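
The twenty-four and forty-eight hour nudges are a few lines once the timestamps exist. A sketch, assuming a last_moved_at field and leaving message delivery to whatever your stack already provides:

```python
from datetime import datetime, timedelta

# Checked longest-first so the strongest applicable action wins.
NUDGES = [
    (timedelta(hours=48), "escalate to the owner's manager"),
    (timedelta(hours=24), "nudge the owner"),
]

def nudge_for(item, now=None):
    """Return the nudge an idle item has earned, or None if it's moving."""
    idle = (now or datetime.now()) - item["last_moved_at"]
    for threshold, action in NUDGES:
        if idle >= threshold:
            return f"{item['id']}: idle {idle}, {action}"
    return None

print(nudge_for({"id": "T-7",
                 "last_moved_at": datetime.now() - timedelta(hours=30)}))
# T-7: idle ~30h, nudge the owner
```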

If you’re stuck defining the clocks or your pipeline spans too many tools, don’t guess. Run a short Workflow Audit. We’ll map where time really goes, set fair start and finish lines, instrument the pipeline you already have, and hand you a ninety-day plan to shrink lead time without burning the team. No theater. Just flow.