This report combines Microsoft 365 license data, Datto RMM alert volume, and Autotask ticket metrics into a single overhead score per client. The goal: identify which clients consume the most operational resources across all three systems and whether that overhead lines up with SLA performance. Three data sources, one question for the CFO: where is the real cost concentrated?
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Account managers, MSP owners, and service delivery leads
How often: Monthly for client reviews, quarterly for QBRs, on-demand when client signals change
Licenses dominate the raw numbers because they include free and trial SKUs in the M365 tenant pool. The real operational cost sits in the 135,387 RMM alerts and 67,521 tickets. Every alert that is not auto-resolved and every ticket that requires manual handling translates into technician time. At an average of 0.49 hours per ticket, the total ticket burden alone represents roughly 33,085 hours of labor.
```dax
EVALUATE ROW(
    "TotalLicenses", [Total Licenses],
    "LicenseUtil", [License Utilization %],
    "ActiveUsers", [Active Users],
    "TotalAlerts", COUNTROWS(BI_Datto_Rmm_Alerts),
    "TotalTickets", [Tickets - Count - Created],
    "OpenTickets", [Open Tickets (Current)],
    "AvgHours", [Tickets - Avg Hours Per Ticket]
)
```
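The ticket-burden arithmetic above can be expressed as a reusable measure. This is a sketch, not the model's actual definition: the `Estimated Ticket Hours` name is hypothetical, while the two component measures are the ones used throughout this report.

```dax
-- Hypothetical measure: estimated labor implied by ticket volume.
-- Component measure names are taken from this report's queries.
Estimated Ticket Hours :=
[Tickets - Count - Created] * [Tickets - Avg Hours Per Ticket]
-- At the totals above: 67,521 tickets * 0.49 hrs ≈ 33,085 hours
```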
| Client | Licenses | Alerts | Tickets | Overhead Score |
|---|---|---|---|---|
| Client A | 20,403 | 3,838 | 5,290 | 29,531 |
| Client B | 0 | 26,873 | 2,775 | 29,648 |
| Client C | 0 | 9,307 | 5,458 | 14,765 |
| Client D | 0 | 7,430 | 1,803 | 9,233 |
| Client E | 0 | 2,033 | 6,381 | 8,414 |
| Client F | 1,513 | 4,086 | 2,180 | 7,779 |
| Client G | 0 | 5,032 | 2,376 | 7,408 |
| Client H | 0 | 3,437 | 1,758 | 5,195 |
| Client I | 0 | 2,646 | 1,002 | 3,648 |
Client B leads on alerts (26,873) while Client A leads on license count (20,403) and tickets (5,290). These two clients together account for roughly 59,179 overhead units, which is a significant concentration of operational resources in just two accounts.
Client E is an interesting outlier: low alert count (2,033) but the highest ticket volume at 6,381. This pattern suggests their issues arrive as tickets rather than automated alerts, pointing to either end-user-reported problems or a different monitoring setup for that account.
```dax
EVALUATE
TOPN(10,
    ADDCOLUMNS(
        SUMMARIZECOLUMNS(
            BI_Autotask_Companies[company_name],
            "Licenses", [Total Licenses],
            "Alerts", COUNTROWS(BI_Datto_Rmm_Alerts),
            "Tickets", [Tickets - Count - Created]
        ),
        "OverheadScore", [Licenses] + [Alerts] + [Tickets]
    ),
    [OverheadScore], DESC
)
```
| Client | Licenses | Alerts | Tickets | Avg Hrs/Ticket | Open Tickets |
|---|---|---|---|---|---|
| Client A | 20,403 | 3,838 | 5,290 | 0.58 | 40 |
| Client B | 0 | 26,873 | 2,775 | 0.74 | 33 |
| Client C | 0 | 9,307 | 5,458 | 0.66 | 65 |
| Client E | 0 | 2,033 | 6,381 | 0.17 | 113 |
| Client G | 0 | 5,032 | 2,376 | 0.62 | 20 |
| Client F | 1,513 | 4,086 | 2,180 | 0.38 | 25 |
| Client D | 0 | 7,430 | 1,803 | 0.53 | 20 |
| Client H | 0 | 3,437 | 1,758 | 0.69 | 13 |
| Client J | 0 | 1,486 | 1,629 | 0.58 | 18 |
| Client K | 0 | 1,531 | 1,481 | 0.13 | 4 |
Client E stands out with 113 open tickets, the largest backlog in the dataset. Despite having only 2,033 alerts, their 6,381 total tickets at 0.17 hours each suggest a high volume of quick-touch issues that pile up fast. The low hours-per-ticket implies these are mostly routine tasks, but the open count means the team is not closing them at the rate they arrive.
Client B has the highest average hours per ticket at 0.74, meaning each ticket from that account takes roughly 44 minutes. Combined with 26,873 RMM alerts, this client represents a consistently heavy workload across both monitoring and service delivery.
| Company | Tickets | Time Entries | Hours |
|---|---|---|---|
| Rivers, Rogers and Mitchell | 6,381 | 2,970 | 1,662 |
| Craig-Huynh | 5,458 | 7,466 | 4,370 |
| Little Group | 5,290 | 6,176 | 3,791 |
| Martin Group | 2,775 | 3,065 | 2,217 |
| Wall PLC | 2,376 | 4,300 | 1,697 |
| Blanchard-Glenn | 2,364 | 47 | 9 |
| Price-Gomez | 2,180 | 2,340 | 865 |
| Thompson et al | 1,803 | 2,028 | 1,006 |
| Lewis LLC | 1,758 | 3,522 | 2,801 |
| Ramos Group | 1,728 | 1,892 | 1,171 |
Client E confirms the pattern from section 5.0. With the largest open backlog (113 tickets) and the worst SLA numbers (43.2% first response, 79.3% resolution), this client is both high-volume and underserved. Their overhead is not just a cost issue. It is a service delivery risk.
Client D and Client B both show first response rates below 76%. These are clients with high overhead scores and declining SLA performance, which means the operational burden is already affecting service quality.
```dax
EVALUATE
TOPN(10,
    SUMMARIZECOLUMNS(
        'BI_Autotask_Companies'[company_name],
        "Tickets", COUNTROWS('BI_Autotask_Tickets'),
        "TimeEntries", COUNTROWS('BI_Autotask_Time_Entries'),
        "HoursWorked", SUM('BI_Autotask_Time_Entries'[hours_worked])
    ),
    [Tickets], DESC
)
```
The composition reveals very different overhead profiles. Client B is almost entirely alert-driven (90.6% of their overhead comes from RMM), while Client E is ticket-driven (75.8% tickets). Client A is the only one with a meaningful license component, carrying 20,403 M365 licenses that make up 69% of their overhead.
This matters for resource planning. Alert-heavy clients need monitoring tuning and automation. Ticket-heavy clients need process improvements and capacity allocation. License-heavy clients need provisioning reviews.
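The composition percentages quoted above can be produced directly from the same summary used for the overhead ranking. This is a sketch built from measures and tables that appear elsewhere in this report; the share column names are new.

```dax
-- Sketch: overhead composition per client. Extension columns from
-- SUMMARIZECOLUMNS are referenced by name in the outer row context,
-- mirroring the overhead-score query earlier in this report.
EVALUATE
ADDCOLUMNS(
    SUMMARIZECOLUMNS(
        BI_Autotask_Companies[company_name],
        "Licenses", [Total Licenses],
        "Alerts", COUNTROWS(BI_Datto_Rmm_Alerts),
        "Tickets", [Tickets - Count - Created]
    ),
    "AlertShare", DIVIDE([Alerts], [Licenses] + [Alerts] + [Tickets]),
    "TicketShare", DIVIDE([Tickets], [Licenses] + [Alerts] + [Tickets]),
    "LicenseShare", DIVIDE([Licenses], [Licenses] + [Alerts] + [Tickets])
)
```

A client whose `AlertShare` exceeds 0.9, like Client B, is a monitoring-tuning candidate; one whose `TicketShare` dominates, like Client E, is a process and capacity candidate.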
This is the worst combination in the dataset: highest open ticket backlog and lowest SLA performance. The 0.17 hours per ticket suggests these are mostly quick tasks, so the problem is throughput capacity, not complexity. Adding a dedicated resource or running a backlog sprint would address the immediate risk.
This single client accounts for roughly 20% of all RMM alerts in the dataset. At 0.74 hours per ticket and a 73.7% first response rate, the alert volume is almost certainly contributing to SLA pressure. Reviewing alert thresholds and suppression rules for this account could significantly reduce noise.
Client A (29,531) and Client B (29,648) together represent a disproportionate share of the operational burden. Understanding whether these are your highest-revenue accounts is the next question. If they are, the overhead may be justified. If not, there is a margin problem to solve.
While first response rates vary widely (43% to 98%), resolution rates stay above 86% for 9 out of 10 top clients. The team is getting tickets resolved eventually, but the initial response time is where service quality drops. Improving triage speed and auto-assignment would close this gap.
The overhead score is the simple sum of a client's Microsoft 365 license count, Datto RMM alert count, and Autotask ticket count. It is not a financial calculation. It serves as a proxy for operational attention: the higher the number, the more resources that client consumes across provisioning, monitoring, and service delivery. The score is most useful for relative comparisons between clients, not as an absolute cost figure.
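As a single measure, the score described above is just an unweighted sum. The `Overhead Score` measure name is hypothetical; the components are the ones used in this report's queries.

```dax
-- Sketch of the overhead score: a simple unweighted sum, as defined above.
Overhead Score :=
[Total Licenses]
    + COUNTROWS(BI_Datto_Rmm_Alerts)
    + [Tickets - Count - Created]
```

Because the three inputs have very different scales, a weighted variant (for example, discounting licenses relative to tickets) may be worth exploring once the score is tied to actual cost data.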
A zero in the license column means that client's Autotask company record is not yet linked to a Microsoft 365 tenant through the Bridge_All_Companies table. The client likely has M365 licenses, but the data connection has not been mapped. This is a data integration gap, not an indication that the client runs without licenses.
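A quick way to surface these mapping gaps is to list companies with ticket activity but no linked license data. This is a hedged sketch: it assumes [Total Licenses] returns zero or blank for unmapped companies.

```dax
-- Sketch: companies with service activity but no M365 tenant mapping.
-- Assumes [Total Licenses] is 0/blank when the Bridge_All_Companies
-- link is missing.
EVALUATE
FILTER(
    SUMMARIZECOLUMNS(
        BI_Autotask_Companies[company_name],
        "Licenses", [Total Licenses],
        "Tickets", [Tickets - Count - Created]
    ),
    COALESCE([Licenses], 0) = 0 && [Tickets] > 0
)
```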
The measure [Tickets - Avg Hours Per Ticket] divides total billable and non-billable hours logged on tickets by the number of tickets created. A value of 0.49 means an average ticket takes about 29 minutes of logged time. This includes all ticket types: incidents, service requests, and change requests. Very low values (below 0.20) typically indicate automated or bulk-created tickets with minimal manual effort.
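A plausible definition of that measure, assuming the `hours_worked` column seen in this report's time-entry query, would be:

```dax
-- Plausible definition; the model's actual measure may differ.
Tickets - Avg Hours Per Ticket :=
DIVIDE(
    SUM(BI_Autotask_Time_Entries[hours_worked]),
    [Tickets - Count - Created]
)
```

DIVIDE is preferred over `/` here because it returns blank rather than an error for clients with no tickets in the current filter context.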
First response SLA is met when a technician sends the first communication to the client or updates the ticket status within the contracted response window. The percentage represents the share of tickets where this threshold was met. A rate below 70% means more than three in ten tickets did not receive a timely first response, which directly affects client perception of support quality.
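Expressed as a measure, the rate could look like the sketch below. This is illustrative only: the `first_response_met` flag column is an assumption and may not exist under this name in the actual model.

```dax
-- Illustrative only: assumes a boolean first_response_met column
-- on the tickets table, which is not confirmed by this report.
First Response SLA % :=
DIVIDE(
    CALCULATE(
        COUNTROWS(BI_Autotask_Tickets),
        BI_Autotask_Tickets[first_response_met] = TRUE()
    ),
    COUNTROWS(BI_Autotask_Tickets)
)
```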
Start by categorizing alerts for the top client by type (disk, CPU, memory, offline, patch). Then identify which alert categories generate the most volume with the fewest ticket conversions. Those are your noise candidates. Common fixes include raising thresholds for disk space warnings, suppressing known-transient CPU spikes, and grouping related alerts into a single incident. Datto RMM supports alert suppression rules per site or device, so you can tune without affecting other clients.
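The categorization step above might start with a query like this sketch. The `alert_type` column name is an assumption, and the filter presumes the alerts table is related to the companies table in the model.

```dax
-- Sketch: alert volume by category for one client.
-- alert_type is an assumed column name; "Client B" stands in for
-- the anonymized account name used in this report.
EVALUATE
CALCULATETABLE(
    SUMMARIZECOLUMNS(
        BI_Datto_Rmm_Alerts[alert_type],
        "AlertCount", COUNTROWS(BI_Datto_Rmm_Alerts)
    ),
    BI_Autotask_Companies[company_name] = "Client B"
)
```

Categories with high `AlertCount` but few corresponding tickets are the suppression candidates described above.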
Yes. All DAX queries in this report run against the live Power BI semantic model via the MCP server. A scheduled monthly run would regenerate the overhead scores with current data, letting you track whether operational burden shifts between clients over time. The generation process takes under 15 minutes, so a monthly cadence adds minimal overhead to your own operations.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.