Descriptive vs. diagnostic vs. predictive vs. prescriptive analytics — what’s the difference?
Descriptive = what happened. Summarizes past data (totals, averages, trends, counts).
Diagnostic = why it happened. Finds drivers and root causes by segmenting and comparing.
Predictive = what will likely happen next. Uses historical patterns to forecast probabilities or volumes.
Prescriptive = what we should do about it. Recommends actions (with expected impact) under real-world constraints.
When to use each: start with descriptive to get a baseline, go diagnostic when a metric moves, add predictive to plan ahead, and use prescriptive to choose (or automate) the best next step.
Simple example:
Descriptive: “Returns rose to 7% last month.”
Diagnostic: “Mostly SKU A via courier X after weekend shipments.”
Predictive: “Without changes, returns will hit 8–9% next month.”
Prescriptive: “Switch courier for SKU A on weekends; add packaging check — expected returns back to ~5%.”
If you want this working in your org, map one metric end-to-end and wire the prescription into a workflow so actions actually happen.
Goal: take one business metric from “we report it” to “we act on it automatically.”
Step 1: Pick one metric and a target
Choose one metric with real cost (e.g., return rate, lead response time, claim cycle time).
Write the target (e.g., “< 5% returns” or “< 2h first response”); see the config sketch below.
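As a concrete starting point, here is a minimal Python sketch of the metric and target written down as a small config rather than a sentence. The metric name, the 5% threshold, and the breakdown dimensions are illustrative assumptions, not prescribed values.

```python
# A minimal sketch: write the metric and its target as data, not prose.
# The metric name, threshold, and dimensions below are illustrative assumptions.
METRIC_SPEC = {
    "name": "return_rate",
    "target": 0.05,                               # "< 5% returns"
    "dimensions": ["sku", "courier", "weekday"],  # used later for breakdowns
}

def within_target(observed: float, spec: dict = METRIC_SPEC) -> bool:
    """True when the observed value meets the written target."""
    return observed < spec["target"]

print(within_target(0.07))  # False: 7% returns miss a < 5% target
```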
Step 2 (Descriptive): build the baseline
One source of truth.
Clean IDs, dedupe, fix missing fields.
Publish a simple trend + breakdown (by product/channel/owner).
Output: a small dashboard your team actually opens (sketch below).
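A minimal pandas sketch of this descriptive step, assuming a simple orders table with order_id, order_date, sku, and returned columns (all illustrative): dedupe on a clean ID, then publish a monthly trend and a per-product breakdown.

```python
import pandas as pd

# Descriptive step: one table, deduped on a clean ID, then a monthly trend
# and a per-product breakdown. Column names and values are illustrative.
orders = pd.DataFrame({
    "order_id":   [1, 2, 2, 3, 4, 5],  # note the duplicated id 2
    "order_date": pd.to_datetime([
        "2024-05-03", "2024-05-10", "2024-05-10",
        "2024-06-02", "2024-06-15", "2024-06-20"]),
    "sku":        ["A", "B", "B", "A", "A", "C"],
    "returned":   [1, 0, 0, 1, 1, 0],
})

clean = orders.drop_duplicates(subset="order_id")        # dedupe
clean = clean.dropna(subset=["order_id", "order_date"])  # drop rows missing keys

# Trend: return rate by month
trend = clean.groupby(clean["order_date"].dt.to_period("M"))["returned"].mean()

# Breakdown: return rate by product
by_sku = clean.groupby("sku")["returned"].mean().sort_values(ascending=False)

print(trend)
print(by_sku)
```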
Step 3 (Diagnostic): find the drivers
Segment the metric by 3–5 dimensions.
Rank contributors by impact (“80% of variance is X + Y”).
Validate with a before/after or holdout period.
Output: top 2–3 causes with evidence (sketch below).
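A minimal pandas sketch of the diagnostic step, again on an assumed orders table: segment by a few dimensions and rank segments by their share of all returns, which is one simple way to back a claim like “80% of the problem is X + Y.”

```python
import pandas as pd

# Diagnostic step: segment the metric by a few dimensions and rank segments
# by their share of all returns. Column names and values are illustrative.
orders = pd.DataFrame({
    "sku":      ["A", "A", "A", "B", "B", "C", "A", "B"],
    "courier":  ["X", "X", "Y", "X", "Y", "Y", "X", "X"],
    "weekend":  [True, True, False, False, False, True, True, False],
    "returned": [1, 1, 0, 0, 1, 0, 1, 0],
})

seg = (orders
       .groupby(["sku", "courier", "weekend"])
       .agg(orders_n=("returned", "size"), returns_n=("returned", "sum")))
seg["return_rate"] = seg["returns_n"] / seg["orders_n"]
seg["share_of_all_returns"] = seg["returns_n"] / seg["returns_n"].sum()

# Rank contributors by impact: the top rows are your "top 2-3 causes" candidates.
print(seg.sort_values("share_of_all_returns", ascending=False).head())
```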
Step 4 (Predictive): forecast the metric
Start simple (baseline + seasonality or a one-feature model).
Show a range, not just a point (e.g., 7–9%).
Track error (MAPE) so people trust it.
Output: next-week forecast with accuracy notes (sketch below).
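A minimal forecasting sketch on made-up weekly return-rate history: a seasonal-naive baseline, a rough range instead of a single point, and a MAPE backtest so the error is visible.

```python
import numpy as np

# Predictive step: a seasonal-naive baseline (this week looks like the same
# week one cycle ago), a rough range instead of a single point, and MAPE from
# a backtest. The weekly return-rate history below is made up.
history = np.array([0.051, 0.055, 0.049, 0.062, 0.058, 0.066, 0.064, 0.071])
season = 4                        # assume a 4-week cycle for illustration

point = history[-season]          # seasonal naive: repeat the value one cycle back
spread = history[-season:].std()  # crude uncertainty from recent volatility
print(f"forecast: {point:.1%} (range {point - spread:.1%} to {point + spread:.1%})")

# Backtest the same rule on the history we already have, and report MAPE.
preds = history[:-season]         # each value "predicts" the one a cycle later
actuals = history[season:]
mape = np.mean(np.abs((actuals - preds) / actuals))
print(f"backtest MAPE: {mape:.1%}")
```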
Step 5 (Prescriptive): write the playbook
Convert findings into rules under constraints:
“If SKU A + courier X + weekend → switch to Y.”
“If lead > 2h unopened → auto-assign + notify manager.”
Estimate expected lift (based on your diagnostics).
Output: a short playbook (if → then, owner, SLA); see the sketch below.
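A minimal sketch of the playbook as data rather than prose: each rule names its condition, action, owner, SLA, and expected lift, so it can be reviewed before it is automated. The specific rule, owner, and lift figures are illustrative, reusing the courier example from earlier in the post.

```python
from dataclasses import dataclass
from typing import Callable

# Prescriptive step: the playbook as data, so every rule names its condition,
# action, owner, SLA, and expected lift. The rule, owner, and lift below are
# illustrative assumptions.
@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    action: str
    owner: str
    sla_hours: int
    expected_lift: str

PLAYBOOK = [
    Rule(
        name="reroute_weekend_sku_a",
        condition=lambda o: o["sku"] == "A" and o["courier"] == "X" and o["weekend"],
        action="switch courier to Y and add a packaging check",
        owner="ops-lead",
        sla_hours=24,
        expected_lift="returns from ~7% back to ~5% (from diagnostics)",
    ),
]

order = {"sku": "A", "courier": "X", "weekend": True}
for rule in PLAYBOOK:
    if rule.condition(order):
        print(f"{rule.name}: {rule.action} (owner {rule.owner}, SLA {rule.sla_hours}h)")
```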
Step 6: Automate and close the loop
Trigger: condition met.
Action: create/assign task, update system, send message.
Feedback: log decision, track outcomes, escalate on SLA breach.
Output: actions happen without chasing; results are measurable (sketch below).
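A minimal sketch of the trigger/action/feedback loop for the “lead > 2h unopened” rule. The lead fields, the assign/notify stand-ins, and the audit log are illustrative placeholders for whatever systems you actually use.

```python
import datetime as dt

# Automation step: trigger (condition met), action (assign + notify), and
# feedback (log the decision so outcomes are measurable). The lead fields and
# the assign/notify stand-ins are illustrative placeholders for real systems.
AUDIT_LOG = []

def assign_and_notify(lead: dict) -> None:
    lead["owner"] = "next-available-rep"   # stand-in for a real CRM call
    print(f"notify manager: lead {lead['id']} auto-assigned")

def check_lead(lead: dict, now: dt.datetime) -> None:
    hours_waiting = (now - lead["created_at"]).total_seconds() / 3600
    triggered = hours_waiting > 2 and not lead["opened"]    # trigger: condition met
    if triggered:
        assign_and_notify(lead)                             # action
    AUDIT_LOG.append({                                      # feedback: log the decision
        "lead_id": lead["id"],
        "hours_waiting": round(hours_waiting, 1),
        "triggered": triggered,
        "at": now.isoformat(),
    })

now = dt.datetime(2024, 7, 1, 12, 0)
check_lead({"id": 42, "created_at": dt.datetime(2024, 7, 1, 9, 0), "opened": False}, now)
print(AUDIT_LOG)
```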
The four types of analytics aren’t competing philosophies—they’re a ladder:
Descriptive (what happened) → Diagnostic (why) → Predictive (what’s likely) → Prescriptive (what to do).
Climb it for one metric, prove impact, then repeat. Keep the stack boring and reliable: a clean data source, a simple dashboard, a lightweight forecast, and a workflow that turns recommendations into action. If you can’t explain the rule, don’t automate it. If you can’t log it, you didn’t do it.
Do this and your analytics stop being slides—they become systems that drive behavior.