Mean time to resolve across 135,387 RMM alerts from 144 monitored sites, broken down by priority level with auto-resolve performance and industry benchmarks
The data covers the full scope of Datto RMM alert records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service delivery managers, operations leads, and MSP owners tracking service quality
How often: Weekly for operational adjustments, monthly for client reporting, quarterly for contract reviews
BI_Datto_Rmm_Alerts table in Proxuma Power BI. Resolution rate = resolved alerts / total alerts. MTTR is the mean of the autoresolve_mins column across all resolved alerts. All 3 DAX queries used in this report are shown in the sections below.
Alert count, resolution rate, and average auto-resolve time per priority level
| Metric | Value |
|---|---|
| Total Alerts | 135,387 |
| Resolved | 132,018 |
| Resolution Rate | 97.5% |
```dax
EVALUATE
ROW(
    "Total Alerts", COUNTROWS('BI_Datto_Rmm_Alerts'),
    "Resolved", COUNTROWS(FILTER('BI_Datto_Rmm_Alerts', 'BI_Datto_Rmm_Alerts'[resolved] = TRUE())),
    "Resolution Rate",
        DIVIDE(
            COUNTROWS(FILTER('BI_Datto_Rmm_Alerts', 'BI_Datto_Rmm_Alerts'[resolved] = TRUE())),
            COUNTROWS('BI_Datto_Rmm_Alerts')
        )
)
```
How effectively your monitoring policies handle alert resolution without manual intervention
```dax
EVALUATE
ROW(
    "TotalAlerts", COUNTROWS('BI_Datto_Rmm_Alerts'),
    "ResolvedAlerts", CALCULATE(COUNTROWS('BI_Datto_Rmm_Alerts'), 'BI_Datto_Rmm_Alerts'[resolved] = TRUE()),
    "TotalSites", DISTINCTCOUNT('BI_Datto_Rmm_Alerts'[site_name]),
    "AvgAutoResolve", AVERAGE('BI_Datto_Rmm_Alerts'[autoresolve_mins])
)
```
How your 135,387 alerts break down by priority classification
87.3% of all alerts are informational. That is typical for RMM environments where disk space warnings, patch notifications, and connectivity blips make up the bulk of the noise. The real question is whether the remaining 12.7% of non-informational alerts get resolved fast enough. Critical alerts at 2.8% of total volume (3,786 alerts) take nearly three times as long to resolve as informational ones.
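The per-priority breakdown described above can be reproduced with a single grouped query. This is a sketch, not one of the report's three queries: the `priority` column name is an assumption (the model is documented as having a priority field, but its exact name may differ).

```dax
-- Sketch: alert count, resolution rate, and average auto-resolve time per priority.
-- Assumes the priority column is named 'priority'; adjust to your semantic model.
EVALUATE
SUMMARIZECOLUMNS(
    'BI_Datto_Rmm_Alerts'[priority],
    "Alert_Count", COUNTROWS('BI_Datto_Rmm_Alerts'),
    "Resolution_Rate",
        DIVIDE(
            CALCULATE(COUNTROWS('BI_Datto_Rmm_Alerts'), 'BI_Datto_Rmm_Alerts'[resolved] = TRUE()),
            COUNTROWS('BI_Datto_Rmm_Alerts')
        ),
    "Avg_AutoResolve_Mins",
        CALCULATE(AVERAGE('BI_Datto_Rmm_Alerts'[autoresolve_mins]), 'BI_Datto_Rmm_Alerts'[resolved] = TRUE())
)
ORDER BY [Alert_Count] DESC
```

Ordering by alert count puts the informational bucket first, so the 87.3% noise share is immediately visible at the top of the result.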
Focused analysis of 5,253 critical and high-priority alerts that demand the fastest response
| Site | Resolved Alerts |
|---|---|
| Martin Group | 26,859 |
| Craig-Huynh | 8,801 |
| Thompson, Contreras and Rios | 7,248 |
| Wall PLC | 5,319 |
| Willis, Allen and Phillips | 5,018 |
Critical alerts average 14.60 minutes to resolve. That is nearly three times the 5.15-minute average for informational alerts. While the 98.7% resolution rate looks strong, 49 critical alerts remain open. Those 49 open critical alerts should be the first thing your NOC reviews tomorrow morning.
High-priority alerts have the lowest resolution rate at 95.2%, with 70 still open. That gap of 4.8% unresolved high alerts is the widest of any priority level. It suggests that high alerts are less likely to auto-resolve and may be falling through the cracks between automated policies and manual triage.
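To see which high alerts are falling through the cracks, you can list the open ones directly. This query is a sketch: the `priority` column name and the "High" label value are assumptions and should be matched to your model.

```dax
-- Sketch: list the 70 unresolved high-priority alerts with their sites.
-- 'priority' column name and "High" label are assumed; verify against your model.
EVALUATE
SELECTCOLUMNS(
    FILTER(
        'BI_Datto_Rmm_Alerts',
        'BI_Datto_Rmm_Alerts'[resolved] = FALSE()
            && 'BI_Datto_Rmm_Alerts'[priority] = "High"
    ),
    "Alert", 'BI_Datto_Rmm_Alerts'[alert_uid],
    "Site", 'BI_Datto_Rmm_Alerts'[site_name]
)
```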
```dax
EVALUATE
TOPN(
    10,
    GROUPBY(
        FILTER('BI_Datto_Rmm_Alerts', 'BI_Datto_Rmm_Alerts'[resolved] = TRUE()),
        'BI_Datto_Rmm_Alerts'[site_name],
        "Resolved_Count", COUNTX(CURRENTGROUP(), 'BI_Datto_Rmm_Alerts'[alert_uid])
    ),
    [Resolved_Count], DESC
)
```
How your MTTR compares to industry standards for RMM alert resolution
| Benchmark | Threshold | Your Priority Levels | Verdict |
|---|---|---|---|
| Excellent | < 5 minutes | Moderate (4.91 min) | 1 of 5 levels |
| Good | 5 – 15 minutes | Information (5.15), Low (5.84), High (7.17) | 3 of 5 levels |
| Needs Attention | > 15 minutes | Critical (14.60 min — borderline) | At the threshold |
Only one priority level sits in the "Excellent" bracket: Moderate at 4.91 minutes. Three levels fall in the "Good" range between 5 and 15 minutes. Critical alerts at 14.60 minutes sit right at the boundary of the "Needs Attention" threshold. A small increase in critical alert volume or complexity could push that number past 15 minutes.
The gap between your fastest category (Moderate, 4.91 min) and your slowest (Critical, 14.60 min) is a 3x difference. That ratio is worth tracking month over month. If it widens, it means your critical alert handling processes are not keeping pace with alert volume growth.
With a 97.5% overall resolution rate and an average MTTR of 5.45 minutes, your monitoring policies are doing their job for the majority of alerts. Information and moderate alerts resolve in under 5.2 minutes on average, meaning your automated responses are well-tuned for routine monitoring events. That is solid operational performance for an environment generating over 135,000 alerts.
At 14.60 minutes, critical alert MTTR is almost triple the 5.15-minute average for informational alerts. This is expected to some degree since critical alerts often require manual triage, but the 49 open critical alerts and the borderline benchmark position suggest room to improve. If your SLA target for critical alerts is under 15 minutes, you are cutting it close.
While all other priority levels resolve at 96.8% or better, high alerts lag behind at 95.2% with 70 still open. That is the widest unresolved gap in the data. High alerts may be falling into a middle ground where they are not critical enough for immediate escalation but not routine enough for auto-resolve policies. This gap needs a dedicated triage process.
4 priorities based on the findings above
Open critical alerts are a direct risk. Pull the list of 49 unresolved critical alerts, check which sites they belong to, and determine whether they represent genuine issues or stale alerts from resolved incidents that were not marked as closed. If they are real, escalate. If they are stale, clean them up and fix the auto-resolve policy that missed them.
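Pulling that list is one query. As a sketch (the `priority` column name and "Critical" label are assumptions to verify against your model):

```dax
-- Sketch: the 49 unresolved critical alerts, with site, for the morning NOC review.
-- 'priority' column name and "Critical" label are assumed.
EVALUATE
SELECTCOLUMNS(
    FILTER(
        'BI_Datto_Rmm_Alerts',
        'BI_Datto_Rmm_Alerts'[resolved] = FALSE()
            && 'BI_Datto_Rmm_Alerts'[priority] = "Critical"
    ),
    "Alert", 'BI_Datto_Rmm_Alerts'[alert_uid],
    "Site", 'BI_Datto_Rmm_Alerts'[site_name]
)
```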
The 95.2% resolution rate for high alerts is the weakest in your stack. Build a dedicated queue or escalation path for high-priority alerts that do not auto-resolve within 10 minutes. Assign a rotation of technicians to check the high-alert queue every hour during business hours. The goal is to bring that resolution rate above 98% and reduce the open count from 70 to single digits.
A 14.60-minute average for critical alerts is borderline. Break that number down further by site and by alert type. Are certain sites or certain alert categories driving the average up? If 80% of your critical alerts resolve in 5 minutes but a small subset takes 45+ minutes, you need to fix those outliers rather than optimize the whole pipeline.
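One way to sketch that breakdown by site, assuming a `priority` column with a "Critical" label (adjust names to your model):

```dax
-- Sketch: slowest sites for critical-alert resolution, to surface outliers
-- that drag the 14.60-minute average up.
EVALUATE
TOPN(
    10,
    SUMMARIZECOLUMNS(
        'BI_Datto_Rmm_Alerts'[site_name],
        FILTER(
            'BI_Datto_Rmm_Alerts',
            'BI_Datto_Rmm_Alerts'[priority] = "Critical"
                && 'BI_Datto_Rmm_Alerts'[resolved] = TRUE()
        ),
        "Avg_Critical_Mins", AVERAGE('BI_Datto_Rmm_Alerts'[autoresolve_mins])
    ),
    [Avg_Critical_Mins], DESC
)
```

The same pattern works for alert type: swap the site column for the alert-category column in your model.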
Your current numbers are solid overall. The risk is that they degrade slowly without anyone noticing. Run this report monthly and track the critical MTTR trend. If it creeps past 15 minutes or if the high-alert open count starts climbing, you will catch it before it becomes a service delivery problem.
MTTR stands for Mean Time to Resolve. In this report, it measures the average number of minutes between an RMM alert being created and that alert being marked as resolved. This includes both auto-resolved alerts (handled by monitoring policies) and manually resolved alerts (closed by a technician).
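In DAX terms, the overall MTTR used in this report is a simple filtered average, along the lines of:

```dax
-- Overall MTTR: mean of autoresolve_mins across resolved alerts only.
EVALUATE
ROW(
    "Overall_MTTR_Mins",
    CALCULATE(
        AVERAGE('BI_Datto_Rmm_Alerts'[autoresolve_mins]),
        'BI_Datto_Rmm_Alerts'[resolved] = TRUE()
    )
)
```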
The data comes from Datto RMM via the Proxuma Power BI connector. Alerts are synced into the BI_Datto_Rmm_Alerts table in the semantic model, which includes fields for priority, resolved status, site name, and auto-resolve time in minutes. The AI runs DAX queries against this table to generate the report.
Auto-resolve happens when a monitoring policy in Datto RMM detects that the condition that triggered the alert has cleared. For example, a disk space alert that fires at 90% usage and clears when usage drops below 85%. The autoresolve_mins field captures how long that process took. Manual resolution happens when a technician marks the alert as resolved in the RMM console.
Critical alerts often require human judgment. A server offline alert cannot auto-resolve until the server comes back up, which might depend on a technician rebooting it, replacing hardware, or restoring from backup. These steps take time. The 14.60-minute average for critical alerts reflects the complexity of the issues behind them, not necessarily a slow response.
For auto-resolved alerts, under 5 minutes is excellent. For critical alerts requiring manual intervention, most MSPs target under 15 minutes for acknowledgment and under 60 minutes for full resolution. Your overall average of 5.45 minutes is strong, but the 14.60-minute critical MTTR is worth monitoring to make sure it does not drift higher.
Yes. Add a filter on BI_Datto_Rmm_Alerts[site_name] to any of the DAX queries shown in this report. That will give you site-specific MTTR numbers, which are especially useful for QBRs or for investigating a site with a high alert volume.
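For example, a site-scoped version of the summary query might look like this (Martin Group is one of the sites shown in the table above; substitute any site name):

```dax
-- Sketch: per-site MTTR and alert volume, scoped with CALCULATETABLE.
EVALUATE
CALCULATETABLE(
    ROW(
        "Site_MTTR_Mins", AVERAGE('BI_Datto_Rmm_Alerts'[autoresolve_mins]),
        "Site_Alerts", COUNTROWS('BI_Datto_Rmm_Alerts')
    ),
    'BI_Datto_Rmm_Alerts'[site_name] = "Martin Group"
)
```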
Yes. Connect Proxuma Power BI to your Datto RMM account, add an AI tool (Claude, ChatGPT, or Copilot) via MCP, and ask the same question. The AI writes the DAX queries, runs them against your live alert data, and produces a report like this one in under fifteen minutes.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.
See more reports Get started