Generated by AI via Proxuma Power BI MCP server. Ticket type distribution across 67,521 records from Autotask PSA. Covers 5 ticket types, 41 issue categories, and 127 sub-types with priority and queue routing breakdowns.
The data covers all 67,521 Autotask PSA records in scope, broken down by ticket type, issue category, sub-type, priority, and queue: the dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service desk managers, dispatch leads, and operations teams
How often: Daily for queue management, weekly for trend analysis, monthly for capacity planning
The top 10 issue categories by ticket count were pulled with:

```dax
EVALUATE
TOPN(
    10,
    SUMMARIZECOLUMNS(
        'BI_Autotask_Tickets'[ticket_category_name],
        "TicketCount", COUNTROWS('BI_Autotask_Tickets')
    ),
    [TicketCount], DESC
)
```
Ticket type breakdown by count, share, and average worked hours per ticket.
Incidents and Alerts combined account for 70.3% of all ticket volume: that's your reactive workload. Your engineers spend the majority of their time responding to things that have already gone wrong or been flagged by monitoring systems. Service Requests add another 18.7%, leaving Change Requests at 10.7% and Problems at just 0.2%, a tiny slice that is disproportionately expensive per ticket.
Average hours per ticket tell a sharply different story by type. Alerts average just 0.58h, suggesting many resolve quickly or with minimal triage. Problem tickets sit at the opposite end at 6.00h each. That roughly tenfold difference is exactly why classifying tickets correctly matters for capacity planning.
| Ticket Type | Count | % Share | Avg Hours | Work Type |
|---|---|---|---|---|
| Incident | 27,664 | 41.0% | 0.80h | Reactive |
| Alert | 19,790 | 29.3% | 0.58h | Monitoring |
| Service Request | 12,653 | 18.7% | 1.05h | Proactive |
| Change Request | 7,247 | 10.7% | 1.12h | Planned |
| Problem | 167 | 0.2% | 6.00h | Root Cause |
The query behind the table above:

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[ticket_type_name]),
    "Ticket Count", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    "Pct", DIVIDE(CALCULATE(COUNTROWS('BI_Autotask_Tickets')), COUNTROWS('BI_Autotask_Tickets')) * 100,
    "Avg Hours", CALCULATE(AVERAGE('BI_Autotask_Tickets'[worked_hours]))
)
ORDER BY [Ticket Count] DESC
```
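Volume share and hours share are not the same thing: Problems are 0.2% of tickets but a much larger slice of worked time. A variant of the query above, sketched here with the same table and column names, adds each type's share of total worked hours:

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[ticket_type_name]),
    "Ticket Count", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    "Total Hours", CALCULATE(SUM('BI_Autotask_Tickets'[worked_hours])),
    -- each type's share of all worked hours; the denominator is unfiltered,
    -- mirroring the Pct pattern in the query above
    "Hours Pct", DIVIDE(
        CALCULATE(SUM('BI_Autotask_Tickets'[worked_hours])),
        SUM('BI_Autotask_Tickets'[worked_hours])
    ) * 100
)
ORDER BY [Total Hours] DESC
```

Sorting by total hours rather than ticket count is the view that matters for capacity planning.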
Priority breakdown for Incidents, with a notable anomaly in P2 vs P1 average hours.
The majority of Incidents land at P4 Low (55.1%) or P3 Medium (31.6%). That's expected for a healthy service desk: most user-reported issues are not critical. What catches the eye is the P2/P1 reversal. Worked hours climb with priority from P4 (0.72h) to P3 (0.88h) to P2 High (1.87h), then drop to 1.71h for P1 Critical, breaking the severity-effort gradient. The most likely explanation is that P1 tickets trigger immediate escalation and dedicated resource assignment, which closes them efficiently even when they are complex, while P2 tickets sit in a standard queue longer before escalation, accumulating extra touches and worked time.
For Alerts, the priority distribution is nearly flat across P4, P3, and P1 (5,264, 4,522, and 4,846 tickets respectively), with P2 an outlier at 892. Seeing almost as many P1 Critical Alerts as P4 Low ones suggests your monitoring rules may not be calibrated consistently, or that alert severity in your RMM doesn't translate cleanly to Autotask priority levels.
| Priority | Incidents | % of Incidents | Avg Hours | Signal |
|---|---|---|---|---|
| P4 — Low | 15,233 | 55.1% | 0.72h | Expected majority |
| P3 — Medium | 8,753 | 31.6% | 0.88h | Normal |
| Service/Change req. | 2,750 | 9.9% | 0.81h | Mixed |
| P2 — High | 774 | 2.8% | 1.87h | Anomaly: slower than P1 |
| P1 — Critical | 154 | 0.6% | 1.71h | Review calibration |
The query behind the priority breakdown:

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_Autotask_Tickets',
        'BI_Autotask_Tickets'[ticket_type_name],
        'BI_Autotask_Tickets'[priority_name]
    ),
    "Ticket Count", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    "Avg Hours", CALCULATE(AVERAGE('BI_Autotask_Tickets'[worked_hours]))
)
ORDER BY 'BI_Autotask_Tickets'[ticket_type_name], [Ticket Count] DESC
```
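If you want to track the reversal as a single number over time, a small query can compute the P2-to-P1 effort ratio directly. This is a sketch: the priority labels below are assumptions based on the display names in the table above, so verify them against the actual values in priority_name:

```dax
EVALUATE
VAR P1Avg =
    CALCULATE(
        AVERAGE('BI_Autotask_Tickets'[worked_hours]),
        'BI_Autotask_Tickets'[ticket_type_name] = "Incident",
        'BI_Autotask_Tickets'[priority_name] = "P1 - Critical"  -- assumed label
    )
VAR P2Avg =
    CALCULATE(
        AVERAGE('BI_Autotask_Tickets'[worked_hours]),
        'BI_Autotask_Tickets'[ticket_type_name] = "Incident",
        'BI_Autotask_Tickets'[priority_name] = "P2 - High"  -- assumed label
    )
RETURN
    ROW("P1 Avg Hours", P1Avg, "P2 Avg Hours", P2Avg, "P2/P1 Ratio", DIVIDE(P2Avg, P1Avg))
```

A ratio persistently above 1 means P2 incidents keep absorbing more effort per ticket than P1.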
Queue routing distribution per ticket type reveals staffing alignment and automation opportunities.
L1 is your highest-volume queue for both Incidents (52%, 14,511 tickets) and Service Requests (68%, 8,571 tickets). That's a healthy pattern — frontline engineers absorbing the bulk of reactive and request work. The more interesting number is Alerts: 53% route through Centralized Services (10,546 tickets), with another 20% going to L1. That centralization is already a structural advantage. It means your organization has effectively created an automation-ready layer for monitoring events.
Problem tickets tell a different story: 76% land in the Customer Success queue (127 tickets). This makes sense if your CSMs are handling root-cause discussions with clients, but it's worth confirming whether those 127 Problem tickets are being worked with the depth they need, given that Problem tickets in a Professional Services context average 62.9 hours each when investigated seriously.
| Ticket Type | Primary Queue | Count | % of Type | Secondary Queue | Count |
|---|---|---|---|---|---|
| Incident | L1 | 14,511 | 52% | Centralized Services | 5,918 |
| Alert | Centralized Services | 10,546 | 53% | L1 | 4,022 |
| Service Request | L1 | 8,571 | 68% | L2 | 1,029 |
| Change Request | L1 | 4,272 | 59% | L2 | 928 |
| Problem | Customer Success | 127 | 76% | Centralized Services | 13 |
The query behind the queue routing table; note it also returns a first-response-met rate per queue that the table above doesn't show:

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_Autotask_Tickets',
        'BI_Autotask_Tickets'[ticket_type_name],
        'BI_Autotask_Tickets'[queue_name]
    ),
    "Ticket Count", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    "Avg Hours", CALCULATE(AVERAGE('BI_Autotask_Tickets'[worked_hours])),
    "FRM Pct", CALCULATE(AVERAGE('BI_Autotask_Tickets'[first_response_met]))
)
ORDER BY 'BI_Autotask_Tickets'[ticket_type_name], [Ticket Count] DESC
```
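The query returns raw counts; the "% of Type" figures in the table require dividing each queue's count by the type's total across all queues. A sketch of that extra column, using the same table:

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_Autotask_Tickets',
        'BI_Autotask_Tickets'[ticket_type_name],
        'BI_Autotask_Tickets'[queue_name]
    ),
    "Ticket Count", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    -- queue count divided by the type's total across all queues
    "Pct of Type", DIVIDE(
        CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
        CALCULATE(
            COUNTROWS('BI_Autotask_Tickets'),
            ALLEXCEPT('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[ticket_type_name])
        )
    ) * 100
)
ORDER BY 'BI_Autotask_Tickets'[ticket_type_name], [Ticket Count] DESC
```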
Five observations your service manager should act on.
Your team is spending the majority of its time responding to events rather than delivering planned work. That's not inherently wrong for an MSP, but it sets a ceiling on proactive service delivery. Tracking whether this ratio shifts over time is a useful signal of whether your clients' environments are becoming more stable.
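One way to watch that ratio over time, sketched under the assumption that the model exposes a month-level date column (created_month here is hypothetical; substitute your actual date field):

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_Autotask_Tickets',
        'BI_Autotask_Tickets'[created_month]  -- hypothetical column; use your model's date field
    ),
    -- Incidents + Alerts as a share of all tickets opened that month
    "Reactive Pct", DIVIDE(
        CALCULATE(
            COUNTROWS('BI_Autotask_Tickets'),
            'BI_Autotask_Tickets'[ticket_type_name] IN { "Incident", "Alert" }
        ),
        CALCULATE(COUNTROWS('BI_Autotask_Tickets'))
    ) * 100
)
ORDER BY 'BI_Autotask_Tickets'[created_month]
```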
19,790 Alerts at 0.58h each represents roughly 11,478 hours of engineer time. Even automating 30% of those alerts would free up over 3,400 hours annually. The question to ask: which alert categories consistently close without meaningful engineer action? Those are your first automation candidates.
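A starting point for finding those candidates: rank alert sub-types by volume and filter to the ones with minimal worked time. This is a sketch, not the report's own query; ticket_subtype_name and the thresholds are assumptions to adapt to your model:

```dax
EVALUATE
TOPN(
    15,
    FILTER(
        ADDCOLUMNS(
            SUMMARIZE(
                'BI_Autotask_Tickets',
                'BI_Autotask_Tickets'[ticket_subtype_name]  -- hypothetical column name
            ),
            "Alert Count", CALCULATE(
                COUNTROWS('BI_Autotask_Tickets'),
                'BI_Autotask_Tickets'[ticket_type_name] = "Alert"
            ),
            "Avg Hours", CALCULATE(
                AVERAGE('BI_Autotask_Tickets'[worked_hours]),
                'BI_Autotask_Tickets'[ticket_type_name] = "Alert"
            )
        ),
        -- illustrative thresholds: high volume, minimal engineer effort
        [Alert Count] >= 100 && [Avg Hours] <= 0.25
    ),
    [Alert Count], DESC
)
```

Sub-types that surface here and rarely escalate are the safest first targets for auto-close or auto-remediate rules.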
When High-priority tickets consistently take longer than Critical-priority tickets, it usually means your P1 escalation path is working (dedicated engineers, immediate response) while P2 tickets drift in standard queues. Review whether your P2 SLA and escalation triggers match the actual urgency those tickets represent.
167 Problem tickets averaging 6.00h each represents 1,002 hours of root-cause investigation work. The low volume may reflect that your team doesn't always open formal Problem records after repeated incidents. Formalizing that process could surface systemic issues that individual incident resolution misses.
Having over half of your Alert volume flow through a single queue is structurally useful. It creates a consistent context for automation rules, playbooks, and RMM integrations. The next step is auditing that queue's resolution patterns to identify which alert types can be moved to auto-close or auto-remediate workflows.
Common questions about ticket category analysis in Autotask PSA.
**What's the difference between an Incident and an Alert?**
An Incident is a manually created ticket for a user-reported disruption. An Alert is automatically generated by your RMM or monitoring tool when a threshold is breached. Alerts represent system-detected events; Incidents represent user-felt impact. The distinction matters for automation: Alerts are candidates for automated remediation, while Incidents typically require human diagnosis first.
**Why are Problem tickets so rare?**
Problems in ITIL represent root-cause investigations: they're created after multiple related incidents to find and fix the underlying issue. They're rare because most teams don't have a formal Problem Management process, and when they do open them, they represent significant investigation effort. If your Problems are averaging less than 2 hours, that's a signal they're being used as a category label rather than a genuine root-cause workflow.
**Should every Alert be automated?**
Not all, but many. With 19,790 alerts averaging 0.58h each, the question is: which ones consistently resolve without meaningful human action? Those are candidates for automated remediation. Alerts that regularly escalate to Incidents need different treatment, as they may indicate real infrastructure gaps. Start by segmenting your alerts by issue type and looking at which categories have the lowest escalation rate and fastest resolution time.
**Why do P2 tickets average more hours than P1 tickets?**
P1 tickets trigger immediate response with dedicated engineers, getting resolved faster even when complex. P2 tickets may sit in a queue longer before escalation, accumulating extra touches and worked time. It can also indicate that some complex issues get logged as P2 when they should be P1. Review your P2 escalation triggers and compare the types of work in each bucket to determine whether the priority definitions are being applied consistently across your team.
**How should queue staffing reflect the ticket mix?**
Match your queue routing to your ticket mix. L1 handles 52% of Incidents and 68% of Service Requests, so that's where volume lands. Centralized Services handles 53% of Alerts, so consider automation investment there first. Technical Alignment handles change and service work at roughly 3x the effort of L1, so staff those engineers for deep work rather than ticket throughput. Start with the queues that absorb the most hours, not the most tickets.
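To see which queues absorb the most hours rather than the most tickets, a simple ranking sketch over the same table:

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[queue_name]),
    "Ticket Count", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    -- total worked hours is the staffing signal, not ticket throughput
    "Total Hours", CALCULATE(SUM('BI_Autotask_Tickets'[worked_hours]))
)
ORDER BY [Total Hours] DESC
```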
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports in minutes, not days.