A cross-source analysis of 135,387 RMM alerts and 67,521 service tickets across 10 clients. This report maps alert noise per client against actual ticket volume to expose which environments generate disproportionate monitoring overhead relative to real support demand.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service desk managers, dispatch leads, and operations teams
How often: Daily for queue management, weekly for trend analysis, monthly for capacity planning
Top-level volume metrics across all monitored clients in the last 12 months.
Side-by-side comparison of RMM alert volume (amber) and service ticket volume (blue) for the top 10 clients by total alerts.
```dax
EVALUATE
TOPN (
    10,
    ADDCOLUMNS (
        SUMMARIZE (
            Bridge_All_Companies,
            Bridge_All_Companies[company_id]
        ),
        "CompName", CALCULATE ( MAX ( 'BI_Autotask_Companies'[company_name] ) ),
        "Alerts", CALCULATE ( COUNTROWS ( 'BI_Datto_Rmm_Alerts' ) ),
        "AutoRes", CALCULATE (
            COUNTROWS ( 'BI_Datto_Rmm_Alerts' ),
            'BI_Datto_Rmm_Alerts'[autoresolve_mins] > 0
        ),
        "Tickets", [Tickets - Count - Created]
    ),
    [Alerts], DESC
)
```
Clients ranked by their alert-to-ticket ratio. A ratio above 3.0 signals a noisy RMM environment. Below 1.0 means the client generates more tickets than alerts -- a sign of clean monitoring but complex support needs.
```dax
EVALUATE
ROW (
    "TotalAlerts", COUNTROWS ( 'BI_Datto_Rmm_Alerts' ),
    "AutoResolved", CALCULATE (
        COUNTROWS ( 'BI_Datto_Rmm_Alerts' ),
        'BI_Datto_Rmm_Alerts'[autoresolve_mins] > 0
    ),
    "TotalTickets", [Tickets - Count - Created]
)
```
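The totals above can also be expressed as a reusable percentage, so the auto-resolved share shows up directly in visuals. A minimal measure sketch, assuming the same table and column names used in the queries in this report (the measure name itself is hypothetical):

```dax
-- Hypothetical measure: share of alerts that self-cleared.
-- Assumes autoresolve_mins > 0 marks an auto-resolved alert,
-- consistent with the other queries in this report.
Auto-Resolved % =
DIVIDE (
    CALCULATE (
        COUNTROWS ( 'BI_Datto_Rmm_Alerts' ),
        'BI_Datto_Rmm_Alerts'[autoresolve_mins] > 0
    ),
    COUNTROWS ( 'BI_Datto_Rmm_Alerts' )
)
```

DIVIDE returns blank instead of erroring when the denominator is zero, which keeps the measure safe for clients with no alerts in the current filter context.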
How the 10 clients break down by alert-to-ticket ratio category. Three clients sit in the high-noise bracket (ratio above 3.0), generating outsized monitoring load.
Mapping the relationship between alert noise and actual support workload. The table below highlights the mismatch between monitoring overhead and ticket demand.
| Client | Alerts | Tickets | Ratio | Classification |
|---|---|---|---|---|
| Client A | 26,873 | 2,775 | 9.7:1 | High Noise |
| Client H | 2,920 | 682 | 4.3:1 | High Noise |
| Client C | 7,430 | 1,803 | 4.1:1 | High Noise |
| Client I | 2,646 | 1,002 | 2.6:1 | Moderate |
| Client D | 5,032 | 2,376 | 2.1:1 | Moderate |
| Client G | 3,437 | 1,758 | 2.0:1 | Balanced |
| Client E | 4,086 | 2,180 | 1.9:1 | Balanced |
| Client B | 9,307 | 5,458 | 1.7:1 | Balanced |
| Client F | 3,838 | 5,290 | 0.7:1 | Ticket-Heavy |
| Client J | 2,033 | 6,381 | 0.3:1 | Ticket-Heavy |
```dax
EVALUATE
TOPN (
    10,
    ADDCOLUMNS (
        SUMMARIZE (
            Bridge_All_Companies,
            Bridge_All_Companies[company_id]
        ),
        "CompName", CALCULATE ( MAX ( 'BI_Autotask_Companies'[company_name] ) ),
        "Alerts", CALCULATE ( COUNTROWS ( 'BI_Datto_Rmm_Alerts' ) ),
        "AutoRes", CALCULATE (
            COUNTROWS ( 'BI_Datto_Rmm_Alerts' ),
            'BI_Datto_Rmm_Alerts'[autoresolve_mins] > 0
        ),
        "Tickets", [Tickets - Count - Created]
    ),
    [Alerts], DESC
)

-- Ratio calculated as: [Alerts] / [Tickets]
-- Classification thresholds:
--   > 3.0     = High Noise
--   2.0 - 3.0 = Moderate
--   1.0 - 2.0 = Balanced
--   < 1.0     = Ticket-Heavy
```
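The ratio and classification described in the comments above can be folded into the query itself, so the table renders fully computed. A sketch under the same model assumptions; the "Ratio" and "Class" column names are illustrative, not part of the standard report:

```dax
-- Sketch: per-client ratio and noise classification in one query.
-- Assumes the same tables and the [Tickets - Count - Created] measure
-- used elsewhere in this report.
EVALUATE
ADDCOLUMNS (
    SUMMARIZE (
        Bridge_All_Companies,
        Bridge_All_Companies[company_id]
    ),
    "CompName", CALCULATE ( MAX ( 'BI_Autotask_Companies'[company_name] ) ),
    "Alerts", CALCULATE ( COUNTROWS ( 'BI_Datto_Rmm_Alerts' ) ),
    "Tickets", [Tickets - Count - Created],
    "Ratio",
        DIVIDE (
            CALCULATE ( COUNTROWS ( 'BI_Datto_Rmm_Alerts' ) ),
            [Tickets - Count - Created]
        ),
    "Class",
        VAR r =
            DIVIDE (
                CALCULATE ( COUNTROWS ( 'BI_Datto_Rmm_Alerts' ) ),
                [Tickets - Count - Created]
            )
        RETURN
            SWITCH (
                TRUE (),
                r > 3.0, "High Noise",
                r >= 2.0, "Moderate",
                r >= 1.0, "Balanced",
                "Ticket-Heavy"
            )
)
```

The SWITCH ( TRUE (), ... ) pattern evaluates the bands top-down, so each threshold only needs a lower bound.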
With a 9.7:1 ratio, Client A accounts for 20% of all alerts but only 4% of tickets. This environment is producing significant monitoring noise that does not translate into real support work. The RMM alert policies for this client need a full review.
Clients A, H, and C collectively produce 56% of all alerts but only 18% of tickets. That imbalance means your monitoring tools are creating work that does not exist. Each of these clients likely has overly sensitive alert thresholds or hardware generating repetitive, non-actionable alerts.
Client J generates 3x more tickets than alerts (0.3:1 ratio), and Client F sits at 0.7:1. These clients have well-tuned monitoring but complex support needs. The ticket volume here comes from user requests and business processes, not from alert escalation. This is the right pattern -- your monitoring is doing its job without adding noise.
Every alert has an autoresolve_mins value greater than zero, meaning none of them required manual intervention. This confirms the alerts are transient. The question is not whether they resolve on their own, but whether generating them at all adds value or just creates dashboard noise.
The core question this report answers is straightforward: how much of your RMM monitoring output turns into actual work? The short answer is not much. Across 135,387 alerts, every single one auto-resolved. None of these alerts escalated into a ticket through the monitoring pipeline. Tickets come from a separate channel entirely -- user requests, scheduled maintenance, and business-driven support needs.
That does not mean RMM alerts are useless. Auto-resolving alerts serve as a health pulse. They confirm that transient conditions (disk spikes, brief connectivity drops, service restarts) recover without intervention. The problem starts when certain clients generate disproportionate volumes. Client A alone produces 26,873 alerts per year -- that is 73 alerts per day, every day. Even if they all auto-resolve, they still consume dashboard real estate, inflate reporting numbers, and make it harder to spot real issues when they surface.
The alert-to-ticket ratio is the most useful metric here. A ratio between 1.0 and 2.0 suggests a healthy balance: the monitoring is active, and the support workload roughly matches the environment's complexity. Once you get above 3.0, the monitoring is producing noise. Client A at 9.7:1 is the extreme case, but Client H (4.3:1) and Client C (4.1:1) also deserve attention.
On the other side, Clients F and J tell a different story. Their ratios below 1.0 mean that tickets outnumber alerts. This is not a bad thing. It means their RMM is well-tuned (few false positives) and their support needs come from business operations rather than infrastructure instability. These are the clients where your technicians spend time on projects and user requests, not chasing phantom alerts.
The practical takeaway: tightening alert thresholds for three clients could cut your total alert volume by more than half without affecting ticket resolution or response quality. That is a meaningful reduction in monitoring noise for zero cost.
Concrete steps to reduce alert noise and improve the signal-to-work ratio across your client base.
Pull the top 5 alert types by volume for Client A. Identify which monitors fire the most and check their thresholds. Disk space warnings at 80% on servers with 2TB drives, for example, generate thousands of alerts that never matter. Raise thresholds or switch to daily summary alerts instead of real-time triggers. Target: cut Client A's alert volume by 70% within 30 days.
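The "top 5 alert types" pull can be sketched as a DAX query. This assumes BI_Datto_Rmm_Alerts carries an alert-type column (shown here as [alert_type] -- check the actual column name in your model) and that the client's display name filters through company_name; both are assumptions, not confirmed by this report:

```dax
-- Sketch: five most frequent alert types for one client.
-- [alert_type] is a hypothetical column name; adjust to your model.
EVALUATE
TOPN (
    5,
    SUMMARIZECOLUMNS (
        'BI_Datto_Rmm_Alerts'[alert_type],
        TREATAS ( { "Client A" }, 'BI_Autotask_Companies'[company_name] ),
        "AlertCount", COUNTROWS ( 'BI_Datto_Rmm_Alerts' )
    ),
    [AlertCount], DESC
)
```

TREATAS applies the client name as a filter without needing a physical slicer, which makes the query easy to rerun per client during the audit.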
Clients H and C both sit above the 4:1 ratio threshold. Apply the same audit approach: identify the top 3 alert types per client and adjust thresholds. Together they account for over 10,000 alerts per year. Bringing them below a 2:1 ratio would remove roughly 5,000 unnecessary alerts annually.
Clients F and J show what a well-tuned RMM configuration looks like. Their alert-to-ticket ratios sit below 1.0, meaning the monitoring stays quiet unless something actually needs attention. Document their alert policies and threshold settings as a template for other clients. When onboarding new clients, start with these settings rather than the default alert package.
An auto-resolved alert is one where the condition that triggered it cleared on its own without manual intervention. The autoresolve_mins field in Datto RMM records how long the alert was active before it self-cleared. A disk space warning that fires at 85% and clears when temp files get purged is a typical example.
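Beyond flagging that an alert self-cleared, the field can be averaged to see how long transient conditions typically persist. A hypothetical measure sketch, assuming the same table and column names used in this report:

```dax
-- Hypothetical measure: mean minutes an alert stayed active
-- before self-clearing. Considers only alerts that actually
-- auto-resolved (autoresolve_mins > 0).
Avg Auto-Resolve Minutes =
CALCULATE (
    AVERAGE ( 'BI_Datto_Rmm_Alerts'[autoresolve_mins] ),
    'BI_Datto_Rmm_Alerts'[autoresolve_mins] > 0
)
```

A short average suggests genuinely transient blips; a long one may indicate conditions worth a real threshold review even though they eventually clear.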
Tickets come from multiple channels: user requests, scheduled work, email-to-ticket pipelines, and phone calls. Alerts only come from RMM monitoring. A client with more tickets than alerts simply has higher user-driven support demand and well-tuned monitoring that does not fire unnecessarily.
There is no universal standard, but based on MSP benchmarks, a ratio between 1.0 and 2.0 is healthy. Below 1.0 means clean monitoring with user-driven ticket volume. Above 3.0 typically indicates alert threshold tuning is needed. Above 5.0 is a red flag that should be addressed immediately.
Not if done correctly. The goal is to remove noise, not disable monitoring. Raising a disk space threshold from 80% to 90% still catches real issues while eliminating thousands of false positives. Switching high-frequency alerts to daily summary reports also preserves visibility without flooding dashboards.
The Bridge_All_Companies table links both data sources by company_id. Alerts come from BI_Datto_Rmm_Alerts and tickets from the Autotask ticket measures. The DAX queries join these at the client level to calculate per-client volumes and ratios.
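To sanity-check that the bridge relationship resolves for both sources, a quick validation query can count how many companies register activity on each side. A sketch under the same model assumptions:

```dax
-- Sketch: how many bridged companies have any alerts vs. any tickets.
-- Uses the same bridge table and ticket measure as the report queries.
EVALUATE
ROW (
    "CompaniesWithAlerts",
        COUNTROWS (
            FILTER (
                VALUES ( Bridge_All_Companies[company_id] ),
                CALCULATE ( COUNTROWS ( 'BI_Datto_Rmm_Alerts' ) ) > 0
            )
        ),
    "CompaniesWithTickets",
        COUNTROWS (
            FILTER (
                VALUES ( Bridge_All_Companies[company_id] ),
                [Tickets - Count - Created] > 0
            )
        )
)
```

If either count comes back lower than expected, the company_id mapping between the RMM and PSA sources is the first place to look.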
Yes. Copy any query from the toggles above and paste it into DAX Studio or the Power BI Desktop performance analyzer. The queries reference standard Proxuma data model tables and measures that exist in every Proxuma Power BI deployment.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports -- in minutes, not days.