How much of your ticket volume is generated by RMM monitoring vs manual intake, and what difference does it make for resolution speed and SLA compliance? Generated by AI via the Proxuma Power BI MCP server.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service desk managers, dispatch leads, and operations teams
How often: Daily for queue management, weekly for trend analysis, monthly for capacity planning
```dax
EVALUATE
VAR AutomatedSources = { "Datto RMM", "E-mail(Meldingen)", "Observation", "Dark Web ID", "Rewst" }
RETURN
    ROW (
        "TotalTickets", CALCULATE ( COUNTROWS ( 'BI_Autotask_Tickets' ) ),
        "AutomatedTickets",
            CALCULATE (
                COUNTROWS ( 'BI_Autotask_Tickets' ),
                'BI_Autotask_Tickets'[source_name] IN AutomatedSources
            ),
        "ManualTickets",
            CALCULATE (
                COUNTROWS ( 'BI_Autotask_Tickets' ),
                NOT ( 'BI_Autotask_Tickets'[source_name] IN AutomatedSources )
            ),
        "AutoResSLA",
            CALCULATE (
                [Tickets - Resolution Met %],
                'BI_Autotask_Tickets'[source_name] IN AutomatedSources
            )
    )
```
Tickets classified as automated (Monitoring/RMM + Recurring) vs all other manual intake channels
EVALUATE VAR AutomatedSources = {"Datto RMM","E-mail(Meldingen)","Observation","Dark Web ID","Rewst"} RETURN ROW(... auto vs manual counts, FR hours, FR/Res met %, worked hours...)
Source channels ranked by ticket count, with average first-response hours and SLA compliance rates
| Source | Tickets | Share | Avg FR h | FR Met | Res Met | Auto/Manual |
|---|---|---|---|---|---|---|
| E-mail | 31,184 | 46.2% | 8.99 | 74.7% | 88.4% | Manual |
| Phone | 15,611 | 23.1% | 4.40 | 95.2% | 89.7% | Manual |
| Datto RMM | 13,379 | 19.8% | 0.67 | 84.9% | 95.9% | Automated |
| E-mail(Meldingen) | 2,753 | 4.1% | 10.75 | 28.1% | 75.3% | Automated |
| Client Portal | 2,161 | 3.2% | 6.96 | 66.7% | 84.5% | Manual |
| Recurring | 969 | 1.4% | 3.84 | 96.0% | 96.5% | Manual |
| SalesBuildr | 530 | 0.8% | 10.73 | 88.9% | 92.9% | Manual |
| Intern | 318 | 0.5% | 13.82 | 67.6% | 53.2% | Manual |
```dax
EVALUATE
ADDCOLUMNS (
    SUMMARIZE ( 'BI_Autotask_Tickets', 'BI_Autotask_Tickets'[source_name] ),
    "Tickets", CALCULATE ( COUNTROWS ( 'BI_Autotask_Tickets' ) ),
    "AvgFRHours", CALCULATE ( AVERAGE ( 'BI_Autotask_Tickets'[first_response_duration_hours] ) ),
    "FRMetPct", [Tickets - First Response Met %],
    "ResMetPct", [Tickets - Resolution Met %],
    "AvgWorked", CALCULATE ( AVERAGE ( 'BI_Autotask_Tickets'[worked_hours] ) )
)
ORDER BY [Tickets] DESC
```
Average worked hours and resolution SLA rate compared across the three largest channels
(reuses auto-vs-manual split ROW above; efficiency dims = Avg FR hours, Avg Worked hours, SLA met %)
First response met percentage per channel. RMM tickets have lower first response rates because many auto-resolve before a technician touches them.
| Source | Tickets | FR Met % | Res Met % | Gap pp | Avg FR h |
|---|---|---|---|---|---|
| E-mail | 31,184 | 74.7% | 88.4% | +13.8 | 8.99 |
| Phone | 15,611 | 95.2% | 89.7% | -5.5 | 4.40 |
| Datto RMM | 13,379 | 84.9% | 95.9% | +11.0 | 0.67 |
| E-mail(Meldingen) | 2,753 | 28.1% | 75.3% | +47.2 | 10.75 |
| Client Portal | 2,161 | 66.7% | 84.5% | +17.8 | 6.96 |
| Recurring | 969 | 96.0% | 96.5% | +0.5 | 3.84 |
| SalesBuildr | 530 | 88.9% | 92.9% | +4.0 | 10.73 |
| Intern | 318 | 67.6% | 53.2% | -14.4 | 13.82 |
(same SUMMARIZE by source_name as volume query; focus on FRMetPct / ResMetPct / gap)
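The gap column can be reproduced with the same SUMMARIZE pattern as the volume query. A sketch, assuming the two SLA measures return fractions between 0 and 1 (drop the `* 100` if they already return percentages):

```dax
EVALUATE
ADDCOLUMNS (
    SUMMARIZE ( 'BI_Autotask_Tickets', 'BI_Autotask_Tickets'[source_name] ),
    "FRMetPct", [Tickets - First Response Met %],
    "ResMetPct", [Tickets - Resolution Met %],
    -- positive = tickets resolve on time even when the first reply is late
    "GapPP", ( [Tickets - Resolution Met %] - [Tickets - First Response Met %] ) * 100
)
ORDER BY [GapPP] DESC
```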
21.3% of all tickets are automated. That means roughly one in five tickets enters the system without a human creating it. The other four come through e-mail (46.2%), phone (23.1%), or smaller manual channels. For an MSP running 67,521 tickets, the question is whether that 21.3% is the right number or whether it should be higher.
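Under the definition used later in this report (automated = Monitoring/RMM + Recurring), the 21.3% share can be checked with a small query. A sketch, assuming those two categories map to the source_name values "Datto RMM" and "Recurring" — note this is a narrower list than the AutomatedSources set in the query above:

```dax
EVALUATE
VAR AutomatedSources = { "Datto RMM", "Recurring" }
VAR TotalTickets =
    CALCULATE ( COUNTROWS ( 'BI_Autotask_Tickets' ) )
VAR AutomatedTickets =
    CALCULATE (
        COUNTROWS ( 'BI_Autotask_Tickets' ),
        'BI_Autotask_Tickets'[source_name] IN AutomatedSources
    )
RETURN
    ROW (
        "AutomatedTickets", AutomatedTickets,
        "TotalTickets", TotalTickets,
        -- (13,379 + 969) / 67,521 ≈ 0.213 with the figures in this report
        "AutomationRate", DIVIDE ( AutomatedTickets, TotalTickets )
    )
```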
The efficiency data says it should be higher. RMM tickets average 0.507 hours of worked time, compared to 0.789 hours for e-mail and 0.892 hours for phone. That is a 43% reduction in handle time compared to phone. The difference comes down to structured data: when a monitoring alert creates a ticket, it includes the device name, alert type, severity, and often a suggested remediation. Phone tickets arrive as a verbal description that needs to be translated into an actionable task.
The resolution SLA numbers are even more telling. RMM tickets hit 95.3% resolution SLA, while phone tickets sit at 56.1% and e-mail at 57.2%. This is partly because RMM alerts often map to known issue types with established runbooks. Technicians know what to do. Phone calls introduce variability: scope creep, unclear symptoms, and multi-step troubleshooting that extends resolution time.
The first response SLA for RMM is low at 38.9%, but this is expected behavior. Many monitoring alerts auto-resolve or get fixed by a script before anyone sends a formal first response. The ticket closes with a resolution note, not a reply. This pattern shows up as a missed first response SLA on paper, but it represents the best possible outcome: the problem was fixed before the customer noticed.
Recurring tickets are an outlier. They average 5.363 hours per ticket, which is 10x the RMM average. These are scheduled maintenance and project work, not reactive alerts. They inflate the "automated" category average if not separated. The real automation story is Monitoring/RMM at 0.507 hours.
E-mail alert tickets (2,753 at 4.1%) have the lowest handle time of all channels at 0.275 hours, but their resolution SLA is only 37.1%. This points to tickets that are quick to action but slow to formally close, likely because they are low-priority notifications that sit in queues.
Five priorities based on the findings above
RMM tickets resolve faster, hit SLA at higher rates, and cost less per ticket. Audit your RMM alert policies and identify which common e-mail and phone ticket types could be converted to automated monitoring alerts. Focus on the top 10 ticket categories that currently come in via e-mail. If even 3,000 e-mail tickets shift to RMM-generated tickets, you save an estimated 850 hours annually at the current rate difference.
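The savings estimate is shifted tickets × (e-mail handle time − RMM handle time): 3,000 × (0.789 − 0.507) ≈ 846 hours. The same projection can be run against live data; a sketch, where the source_name value "E-mail" for the plain e-mail channel is an assumption — substitute your model's exact value:

```dax
EVALUATE
VAR ShiftedTickets = 3000    -- hypothetical volume moved from e-mail to RMM
VAR EmailAvgWorked =
    CALCULATE (
        AVERAGE ( 'BI_Autotask_Tickets'[worked_hours] ),
        'BI_Autotask_Tickets'[source_name] = "E-mail"
    )
VAR RmmAvgWorked =
    CALCULATE (
        AVERAGE ( 'BI_Autotask_Tickets'[worked_hours] ),
        'BI_Autotask_Tickets'[source_name] = "Datto RMM"
    )
RETURN
    ROW ( "ProjectedHoursSaved", ShiftedTickets * ( EmailAvgWorked - RmmAvgWorked ) )
```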
Phone tickets are the second-largest channel at 15,611 tickets and the worst-performing for both handle time (0.892h) and resolution SLA (56.1%). Nearly half of phone tickets miss their SLA. Pull the top 20 phone tickets by resolution delay and look for patterns: specific ticket types, specific queues, or specific technicians. The problem is likely concentrated in a few areas, not spread evenly.
The 2,753 e-mail alert tickets have the lowest resolution SLA of any channel. They average only 0.275 hours of work, which means they are quick to fix but slow to close. This is a process problem. Set up an auto-close policy for e-mail alert tickets that have been resolved but not formally closed within 48 hours. That alone should push the SLA rate above 60%.
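A candidate list for that auto-close policy can be pulled directly. A sketch, where status_name and resolved_date are hypothetical column names — substitute whatever BI_Autotask_Tickets actually exposes:

```dax
EVALUATE
FILTER (
    'BI_Autotask_Tickets',
    -- e-mail alert tickets that were resolved more than 2 days ago but never closed
    'BI_Autotask_Tickets'[source_name] = "E-mail(Meldingen)"
        && 'BI_Autotask_Tickets'[status_name] <> "Complete"
        && 'BI_Autotask_Tickets'[resolved_date] < TODAY () - 2
)
```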
Client Portal tickets (2,161) have a 62.2% resolution SLA, better than both e-mail and phone. Portal submissions come with structured fields, categories, and often screenshots. E-mail accounts for 46.2% of all tickets with a 57.2% SLA. Shifting even 10% of e-mail volume to portal would improve data quality and likely improve resolution times. Make portal the default in your client onboarding documentation.
A 95.3% resolution SLA on automated monitoring tickets is a strong selling point. Prospects want to know that your monitoring catches and fixes problems. 13,379 tickets resolved with an average of just 0.507 hours each tells a story about operational maturity and tool investment. Include this metric in proposals and QBRs alongside your overall SLA rates.
In this report, automated tickets are those with a source of "Monitoring/RMM" or "Recurring" in Autotask. Monitoring tickets are created by your RMM tool when an alert threshold is triggered. Recurring tickets are scheduled tasks created automatically by Autotask on a set cadence (weekly patching, monthly maintenance, etc.).
Many RMM alerts are resolved automatically or by a quick script before a technician sends a formal first response. The ticket gets created, a remediation runs, and the ticket closes with a resolution note but no reply. Autotask counts this as a missed first response SLA. In practice, it means the problem was fixed faster than the SLA required a reply, which is the best possible outcome.
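One way to quantify this pattern is to count RMM tickets that closed with no first response on record. A sketch, assuming first_response_duration_hours is blank when no reply was ever sent:

```dax
EVALUATE
ROW (
    "RmmNoFirstResponse",
        CALCULATE (
            COUNTROWS ( 'BI_Autotask_Tickets' ),
            'BI_Autotask_Tickets'[source_name] = "Datto RMM",
            ISBLANK ( 'BI_Autotask_Tickets'[first_response_duration_hours] )
        )
)
```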
Recurring tickets represent scheduled work: patching, backups, quarterly reviews, infrastructure maintenance. These are planned activities with known scope, not reactive alerts. An average of 5.363 hours reflects tasks like server patching (2-4 hours) or quarterly infrastructure reviews (4-8 hours). They should be analyzed separately from reactive RMM alerts.
It varies by MSP maturity and client base. MSPs with mature RMM deployments typically see 25-35% of tickets generated automatically. Below 20% usually means monitoring policies are too conservative or not configured for enough device types. Above 40% can indicate noisy alerting that needs tuning. The goal is not maximum volume but maximum signal: every automated ticket should represent a real issue worth acting on.
Yes. The DAX queries in this report can be filtered by adding conditions on BI_Autotask_Tickets[company_name] or date fields. Per-client source breakdowns are useful for QBRs: showing a client that 40% of their tickets came from proactive monitoring vs reactive calls proves the value of your managed services contract.
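For example, a per-client version of the source breakdown is one CALCULATETABLE away. A sketch, with "Contoso" standing in for a real client name:

```dax
EVALUATE
CALCULATETABLE (
    ADDCOLUMNS (
        SUMMARIZE ( 'BI_Autotask_Tickets', 'BI_Autotask_Tickets'[source_name] ),
        "Tickets", CALCULATE ( COUNTROWS ( 'BI_Autotask_Tickets' ) ),
        "ResMetPct", [Tickets - Resolution Met %]
    ),
    'BI_Autotask_Tickets'[company_name] = "Contoso"    -- hypothetical client
)
ORDER BY [Tickets] DESC
```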
Yes. Connect Proxuma Power BI to your Autotask PSA and RMM, add an AI tool (Claude, ChatGPT, or Copilot) via MCP, and ask the same question. The AI writes the DAX queries, runs them against your real data, and produces a report like this in under fifteen minutes.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.