A breakdown of 33,271 worked hours across 67,521 tickets — showing which priorities, queues, and clients consume the most effort per ticket, and what that means for your capacity and pricing.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service desk managers, dispatch leads, and operations teams
How often: Daily for queue management, weekly for trend analysis, monthly for capacity planning
The overall average of 0.49 hours per ticket is pulled down by the large volume of fast L1 Support tickets. The real story lives in the segments: P2 High tickets take nearly two hours each, Professional Services tickets average close to four hours, and some clients generate almost zero effort per ticket because their work is entirely automated monitoring alerts.
```dax
EVALUATE
ROW(
    "TotalTickets", COUNTROWS('BI_Autotask_Tickets'),
    "TicketsWithTime", DISTINCTCOUNT('BI_Autotask_Time_Entries'[ticket_id]),
    -- Divide by all tickets in scope, matching the headline 0.49h figure
    "AvgHoursPerTicket",
        DIVIDE(
            SUM('BI_Autotask_Time_Entries'[hours_worked]),
            COUNTROWS('BI_Autotask_Tickets')
        ),
    "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked])
)
```
Priority levels reveal how complexity is distributed across your ticket types. The most striking finding: P2 High tickets require almost twice the effort of P1 Critical tickets. This counterintuitive result tells you that your P2 tier captures technically complex incidents, not just urgent escalations. P1 tickets often get resolved quickly by senior engineers who know exactly what to do. P2 tickets require sustained investigation and troubleshooting.
| Priority | Avg Hours | Ticket Count | Signal |
|---|---|---|---|
| P2 - High | 1.99h | 1,788 | Complex incidents |
| P1 - Critical | 1.04h | 5,019 | Urgent, fast resolve |
| Service / Change Req. | 0.92h | 15,584 | Planned work |
| P4 - Low | 0.90h | 30,415 | Routine work |
| P3 - Medium | 0.82h | 14,715 | Standard tickets |
P4 Low and P3 Medium tickets cluster around 0.82-0.90 hours, which is the normal baseline for typical support work. The 0.95h gap between P2 (1.99h) and P1 (1.04h) is significant enough to warrant a review of how your team categorizes complex incoming issues. If P2 is consistently harder than P1, your prioritization logic may need adjustment.
```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[priority_name]),
    "Avg Hours",
        AVERAGEX(
            FILTER(
                'BI_Autotask_Tickets',
                'BI_Autotask_Tickets'[priority_name] = EARLIER('BI_Autotask_Tickets'[priority_name])
                    && 'BI_Autotask_Tickets'[worked_hours] > 0
            ),
            'BI_Autotask_Tickets'[worked_hours]
        ),
    "Ticket Count",
        COUNTROWS(
            FILTER(
                'BI_Autotask_Tickets',
                'BI_Autotask_Tickets'[priority_name] = EARLIER('BI_Autotask_Tickets'[priority_name])
            )
        )
)
ORDER BY [Avg Hours] DESC
```
Client-level effort ratios expose something that ticket count alone never shows. Two clients with similar ticket volumes can have wildly different actual support burdens. Martin Group averages 0.74 hours per ticket across 2,775 tickets, suggesting complex infrastructure or a user base that generates technically demanding issues. At the other end, Blanchard-Glenn logs 2,364 tickets with essentially zero effort per ticket — all automated, all resolved without human intervention.
| Client | Tickets | Worked Hours | Avg h / Ticket | Effort Level |
|---|---|---|---|---|
| Martin Group | 2,775 | 2,046h | 0.74h | High effort |
| Lewis LLC | 1,758 | 1,206h | 0.69h | High effort |
| Craig-Huynh | 5,458 | 3,575h | 0.65h | Normal |
| Wall PLC | 2,376 | 1,479h | 0.62h | Normal |
| Little Group | 5,290 | 3,050h | 0.58h | Normal |
| Thompson, Contreras and Rios | 1,803 | 949h | 0.53h | Normal |
| Ramos Group | 1,728 | 875h | 0.51h | Normal |
| Price-Gomez | 2,180 | 823h | 0.38h | Efficient |
| Rivers, Rogers and Mitchell | 6,381 | 1,090h | 0.17h | Alert noise |
| Blanchard-Glenn | 2,364 | 9h | 0.004h | Fully automated |
Rivers, Rogers and Mitchell generates the highest raw ticket count in the dataset (6,381 tickets) but only 0.17 hours per ticket. That gap tells you these are monitoring alerts or auto-created tickets that close without meaningful engineer time. Blanchard-Glenn is even more extreme at 0.004 hours per ticket across 2,364 tickets — essentially a fully automated client environment with almost no human support demand.
For pricing conversations, Martin Group and Lewis LLC deserve scrutiny. Both clients generate above-average effort per ticket across significant volumes. If their contracts were priced on a per-ticket assumption of 0.49 hours, the actual cost of service is running 41-51% higher than the pricing model assumed.
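One way to surface these pricing gaps directly is to compare each client's average against the 0.49h contract assumption. A minimal sketch, reusing the same table and columns as the queries in this report:

```dax
-- Sketch: per-client effort variance against a 0.49h-per-ticket pricing baseline.
-- A positive variance means the client costs more to serve than the model assumed.
EVALUATE
ADDCOLUMNS(
    SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[company_name]),
    "Avg Hours",
        CALCULATE(
            AVERAGE('BI_Autotask_Tickets'[worked_hours]),
            'BI_Autotask_Tickets'[worked_hours] > 0
        ),
    "Variance vs Baseline",
        DIVIDE(
            CALCULATE(
                AVERAGE('BI_Autotask_Tickets'[worked_hours]),
                'BI_Autotask_Tickets'[worked_hours] > 0
            ) - 0.49,
            0.49
        )
)
ORDER BY [Variance vs Baseline] DESC
```

Clients at the top of this list are the ones to review before renewal; the CALCULATE pattern relies on context transition from the SUMMARIZE row, which keeps the query shorter than the EARLIER-based filters used elsewhere in this report.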
```dax
EVALUATE
TOPN(
    10,
    ADDCOLUMNS(
        SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[company_name]),
        "Avg Hours Per Ticket",
            AVERAGEX(
                FILTER(
                    'BI_Autotask_Tickets',
                    'BI_Autotask_Tickets'[company_name] = EARLIER('BI_Autotask_Tickets'[company_name])
                        && 'BI_Autotask_Tickets'[worked_hours] > 0
                ),
                'BI_Autotask_Tickets'[worked_hours]
            ),
        "Ticket Count",
            COUNTROWS(
                FILTER(
                    'BI_Autotask_Tickets',
                    'BI_Autotask_Tickets'[company_name] = EARLIER('BI_Autotask_Tickets'[company_name])
                )
            ),
        "Total Hours",
            SUMX(
                FILTER(
                    'BI_Autotask_Tickets',
                    'BI_Autotask_Tickets'[company_name] = EARLIER('BI_Autotask_Tickets'[company_name])
                ),
                'BI_Autotask_Tickets'[worked_hours]
            )
    ),
    [Ticket Count], DESC
)
ORDER BY [Avg Hours Per Ticket] DESC
```
Queue-level data shows the clearest picture of work type across your operation. The spread is enormous: Recurring (Parked) tickets average 5.77 hours each, while L1 Support tickets average 0.57 hours. These are not the same kind of work — they shouldn't be measured by the same yardstick, and they certainly shouldn't be priced the same.
| Queue | Avg Hours | Ticket Count | Work Category |
|---|---|---|---|
| Recurring (Parked) | 5.77h | 98 | Long-running tasks |
| Professional Services | 3.88h | 546 | Project work |
| Technical Alignment | 3.03h | 2,316 | vCIO / advisory |
| Post Sale | 2.88h | 209 | Implementation |
| Onsite Support | 2.40h | 705 | Field visits |
| L3 Support | 1.97h | 193 | Senior escalation |
| L2 Support | 1.28h | 7,889 | Mid-tier support |
| Centralized Services | 0.83h | 17,082 | Managed services |
| L1 Support | 0.57h | 31,378 | First-line triage |
L1 Support holds 46% of all tickets (31,378) but averages just 34 minutes each. That's your volume absorber — the queue that makes the overall average look small. The genuinely expensive work sits in the top four queues, which together account for under 5% of total tickets but represent a very different cost structure per ticket.
```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE('BI_Autotask_Tickets', 'BI_Autotask_Tickets'[queue_name]),
    "Avg Hours",
        AVERAGEX(
            FILTER(
                'BI_Autotask_Tickets',
                'BI_Autotask_Tickets'[queue_name] = EARLIER('BI_Autotask_Tickets'[queue_name])
                    && 'BI_Autotask_Tickets'[worked_hours] > 0
            ),
            'BI_Autotask_Tickets'[worked_hours]
        ),
    "Ticket Count",
        COUNTROWS(
            FILTER(
                'BI_Autotask_Tickets',
                'BI_Autotask_Tickets'[queue_name] = EARLIER('BI_Autotask_Tickets'[queue_name])
            )
        )
)
ORDER BY [Avg Hours] DESC
```
P2 High averages 1.99h per ticket, compared to 1.04h for P1 Critical. This indicates your P2 tier captures technically complex incidents that require sustained investigation rather than fast escalations. Consider whether your priority definitions need a review — or whether P2 tickets need dedicated handling to prevent engineer context switching.
At 0.74h per ticket across 2,775 tickets, Martin Group generates approximately 2,046 hours of worked time in total. If their contract pricing assumed the 0.49h average, the real cost of servicing this account is materially higher than projected. This warrants a profitability review before the next renewal conversation.
The highest-volume client in the dataset averages just 10 minutes per ticket. This is a clear signal that the vast majority of these tickets are auto-created monitoring alerts or automated processes, not human-generated support requests. Confirming this and separating alert tickets from genuine support tickets would give a cleaner picture of actual per-client support demand.
2,364 tickets with a combined total of just 9 hours worked — an average of 0.004 hours per ticket. This is a well-automated client environment. If this is intentional and matches their contract type, it's a useful model to reference when evaluating other clients that could benefit from similar automation investment.
31,378 tickets flowing through L1 with a 34-minute average resolution time points to effective first-line triage. The overall 0.49h average is largely a product of this high-volume, low-effort queue pulling the number down. When assessing capacity, it's worth separating L1 volume from the more intensive queues to get an accurate read on senior engineer workload.
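Splitting the workload this way can be done with a simple two-column query. A minimal sketch, assuming the same queue names shown in the table above:

```dax
-- Sketch: separate L1 volume from everything else to gauge senior workload.
-- Assumes the queue is labeled exactly "L1 Support", as in the queue table above.
EVALUATE
ROW(
    "L1 Hours",
        CALCULATE(
            SUM('BI_Autotask_Tickets'[worked_hours]),
            'BI_Autotask_Tickets'[queue_name] = "L1 Support"
        ),
    "Non-L1 Hours",
        CALCULATE(
            SUM('BI_Autotask_Tickets'[worked_hours]),
            'BI_Autotask_Tickets'[queue_name] <> "L1 Support"
        )
)
```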
The measure divides total worked hours logged against tickets by the number of tickets in scope. In the priority and queue breakdowns, the calculation filters to tickets with worked_hours greater than zero to avoid skewing the average with tickets that have no time logged at all. The overall figure of 0.49h includes all 67,521 tickets regardless of whether time was logged.
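As a reusable model measure, the headline figure described above could be sketched like this (same tables and columns as the first query in this report):

```dax
-- Sketch: the headline measure as a model definition.
-- Total logged hours divided by all tickets in scope, yielding 0.49h overall.
AvgHoursPerTicket :=
DIVIDE(
    SUM('BI_Autotask_Time_Entries'[hours_worked]),
    COUNTROWS('BI_Autotask_Tickets')
)
```

Defined this way, the measure responds to whatever slicers are applied, so the same definition produces the per-priority, per-queue, and per-client views.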
P1 Critical tickets typically trigger your fastest, most experienced engineers who can often resolve known issues quickly. P2 High tickets tend to be technically complex problems that don't meet the "everything is down" threshold, but require careful investigation, root cause analysis, and testing before resolution. The higher effort per ticket for P2 is a common pattern in MSP operations and usually indicates good triage practice — your team reserves P1 for genuine emergencies and uses P2 for complex-but-not-catastrophic incidents.
Yes, for many purposes. Automated monitoring alerts, RMM-generated tickets, and scripted auto-close tickets dilute the metric and make it harder to assess genuine engineer productivity. Running this report with a filter excluding tickets created by your RMM or monitoring system will give a cleaner baseline for staffing and pricing decisions. Clients like Blanchard-Glenn (0.004h/ticket) and Rivers, Rogers and Mitchell (0.17h/ticket) are likely candidates for this kind of segmentation.
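A hypothetical sketch of that filtered baseline follows. The `[ticket_source]` column and the `"Monitoring Alert"` value are assumptions — substitute whatever field your Autotask setup uses to flag RMM- or monitoring-generated tickets:

```dax
-- Hypothetical sketch: effort per ticket excluding machine-generated tickets.
-- [ticket_source] and "Monitoring Alert" are placeholder names, not confirmed fields.
EVALUATE
ROW(
    "AvgHoursPerTicket (human only)",
        CALCULATE(
            DIVIDE(
                SUM('BI_Autotask_Tickets'[worked_hours]),
                COUNTROWS('BI_Autotask_Tickets')
            ),
            'BI_Autotask_Tickets'[ticket_source] <> "Monitoring Alert"
        )
)
```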
If your per-seat or per-device contracts were priced assuming a certain hours-per-ticket baseline, clients who run significantly above that baseline are consuming more service than their contract anticipates. Martin Group at 0.74h/ticket vs. the 0.49h average means their real service cost is about 51% higher per ticket than the average. Combining this report with a profitability-per-client analysis will show which accounts need repricing or scope adjustment at renewal.
There's no universal benchmark because the figure depends heavily on ticket mix. An MSP running a high proportion of L1 Help Desk tickets will show a lower average than one focused on project work and Technical Alignment. What matters is the trend over time and the segmented view by queue. If your overall average is rising quarter over quarter, dig into which queues are driving it. If L1 support is creeping from 34 minutes to 50 minutes, that's a training or tooling issue. If Professional Services is rising, it may reflect increasingly complex client environments that justify rate adjustments.
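Tracking that trend quarter over quarter could be sketched as below. The `'Date'` dimension with `[Year]` and `[Quarter]` columns is an assumption — adjust to whatever calendar table relates to your ticket data:

```dax
-- Hypothetical sketch: average effort per ticket by quarter, to watch the trend.
-- Assumes a related 'Date' dimension; the column names are placeholders.
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Year],
    'Date'[Quarter],
    "Avg Hours",
        CALCULATE(
            AVERAGE('BI_Autotask_Tickets'[worked_hours]),
            'BI_Autotask_Tickets'[worked_hours] > 0
        )
)
ORDER BY 'Date'[Year], 'Date'[Quarter]
```

Slicing the same query by queue shows whether a rising average comes from L1 creep or from genuinely heavier project work.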
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports in minutes, not days.