“Top Performers vs Underperformers: A Data-Driven Technician Scorecard”

A multi-metric breakdown of 77 technicians across 50,752 hours from Autotask PSA time entries. This report ranks resources by billable percentage, ticket volume, hours per ticket, and client coverage to separate high performers from those who need coaching or workload rebalancing.

Built from: Autotask PSA
How this report was made
1. Autotask PSA: multiple data sources combined
2. Proxuma Power BI: pre-built MSP semantic model, 50+ measures
3. AI via MCP: Claude or ChatGPT writes DAX queries, executes them, formats output
4. This Report: KPIs, breakdowns, trends, recommendations
Ready in < 15 min


The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.

Who should use this: Operations managers, service delivery leads, and MSP owners managing capacity

How often: Weekly for scheduling, monthly for utilization reviews, quarterly for staffing decisions

Time saved: calculating utilization from time entries and ticket data manually is tedious; this report does it automatically.
Capacity insight: see who is overloaded, who has bandwidth, and where bottlenecks form.
Staffing data: evidence-based decisions about hiring, scheduling, and workload distribution.
Report category: Resource & Capacity
Data source: Autotask PSA · Datto RMM · Datto Backup · Microsoft 365 · SmileBack · HubSpot · IT Glue
Refresh: Real-time via Power BI
Generation time: Under 15 minutes
AI required: Claude, ChatGPT or Copilot
Audience: Operations managers, service delivery leads
Where to find this in Proxuma
Power BI › Resources › Top Performers vs Underperformers: A ...
What you can measure in this report
Team Performance Summary
Technician Scorecard: Top 10 by Ticket Volume
Billable Percentage Ranking
Efficiency Matrix: Volume vs Complexity
Workload Distribution by Ticket Volume
Client Coverage per Resource
Key Findings
Recommended Actions
Frequently Asked Questions

Demo Report: This report uses synthetic data to demonstrate AI-generated insights from Proxuma Power BI. The structure, DAX queries, and analysis reflect real MSP data patterns.
1.0 Team Performance Summary

Key workforce metrics from Autotask PSA time entries across 77 active resources.

TEAM SIZE: 77 active resources
AVG BILLABLE %: 75.6% (38,364 of 50,752 hrs)
TOP PERFORMER: 97.1% (Tech N billable rate)
LOWEST PERFORMER: 52.7% (Tech I billable rate)
What are these DAX queries? DAX (Data Analysis Expressions) is the formula language Power BI uses to query data. Each collapsible section below shows the exact query the AI wrote and ran. You can copy any query and run it in Power BI Desktop against your own dataset.
2.0 Technician Scorecard: Top 10 by Ticket Volume

All key metrics side by side: ticket count, average first-response time in hours, and the number (and share) of tickets that met the first-response and resolution SLAs.

Resource | Tickets | Avg FR (h) | FR Met | Res Met
Mr. David Cooper DDS | 21,438 | 2.67 | 9,206 (42.9%) | 16,800 (78.4%)
Tracy Fitzpatrick | 3,600 | 4.02 | 1,744 (48.4%) | 1,905 (52.9%)
Gregory Horn | 3,240 | 3.25 | 2,219 (68.5%) | 2,125 (65.6%)
Brandon Bishop | 2,641 | 5.04 | 1,518 (57.5%) | 1,661 (62.9%)
Jane Stewart | 2,628 | 14.42 | 334 (12.7%) | 933 (35.5%)
Daniel Daniels | 2,444 | 3.50 | 1,947 (79.7%) | 1,786 (73.1%)
Maxwell Reed | 1,906 | 2.80 | 1,407 (73.8%) | 1,246 (65.4%)
Andrew Roberts | 1,899 | 7.53 | 1,059 (55.8%) | 788 (41.5%)
Jonathon Burton | 1,680 | 3.10 | 921 (54.8%) | 894 (53.2%)
David Collins | 1,678 | 12.20 | 352 (21.0%) | 701 (41.8%)
DAX Query: Multi-Metric Resource Scorecard
EVALUATE
TOPN(
  10,
  SUMMARIZECOLUMNS(
    'BI_Autotask_Tickets'[primary_resource_name],
    "TicketCount", COUNTROWS('BI_Autotask_Tickets'),
    "AvgFirstResponseHrs", AVERAGE('BI_Autotask_Tickets'[first_response_duration_hours]),
    "FirstResponseMet", CALCULATE(
      COUNTROWS('BI_Autotask_Tickets'),
      'BI_Autotask_Tickets'[first_response_met] + 0 = 1
    ),
    "ResolutionMet", CALCULATE(
      COUNTROWS('BI_Autotask_Tickets'),
      'BI_Autotask_Tickets'[resolution_met] + 0 = 1
    )
  ),
  [TicketCount], DESC
)
3.0 Billable Percentage Ranking

All 15 resources sorted by billable percentage, highest to lowest. The team average is 75.6%.

Tech N: 97.1%
Tech M: 94.7%
Tech L: 91.3%
Tech D: 89.6%
Tech E: 80.9%
Tech O: 80.9%
Tech H: 77.5%
Tech F: 76.0%
Tech K: 73.3%
Tech A: 72.9%
Tech G: 65.0%
Tech J: 63.6%
Tech B: 61.0%
Tech C: 55.6%
Tech I: 52.7%
DAX Query: Billable Percentage per Resource
EVALUATE
ADDCOLUMNS(
  SUMMARIZECOLUMNS(
    'BI_Autotask_Time_Entries'[resource_name],
    "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked]),
    "BillableHours", CALCULATE(
      SUM('BI_Autotask_Time_Entries'[hours_worked]),
      'BI_Autotask_Time_Entries'[is_non_billable] = FALSE
    )
  ),
  "BillablePct", DIVIDE([BillableHours], [TotalHours])
)
ORDER BY [BillablePct] DESC
4.0 Efficiency Matrix: Volume vs Complexity

Plotting ticket volume against hours per ticket reveals four distinct resource profiles. High-ticket, low-hours resources handle quick tasks. Low-ticket, high-hours resources work on complex projects.

The data splits your technicians into clear groups. Tech N, Tech M, Tech D, Tech E, and Tech J all handle over 2,000 tickets with less than 1 hour per ticket on average. These are your rapid-response techs: password resets, quick fixes, and first-touch resolution.

On the other end, Tech L (84.32 hrs/ticket), Tech F (22.17), and Tech C (20.81) spend significantly more time per ticket. This is not necessarily a problem. These resources likely handle projects, infrastructure work, or complex escalations. The key question is whether those hours are being billed. Tech L bills 91.3% and Tech F bills 76.0%, which is healthy. Tech C at 55.6% is the concern.

Tech G sits in the middle with 11.95 hours per ticket and only 65.0% billable. That combination of moderate complexity and low billing rate deserves a closer look at how time is being categorized.
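The two axes of this matrix come straight from the time entries. The following sketch, reusing the table and column names that appear in the other queries in this report, produces ticket volume and hours per ticket side by side:

DAX Query (sketch): Volume vs Hours per Ticket
EVALUATE
ADDCOLUMNS(
  SUMMARIZECOLUMNS(
    'BI_Autotask_Time_Entries'[resource_name],
    "TicketCount", DISTINCTCOUNT('BI_Autotask_Time_Entries'[ticket_id]),
    "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked])
  ),
  "HrsPerTicket", DIVIDE([TotalHours], [TicketCount])
)
ORDER BY [TicketCount] DESC

Plotting TicketCount on one axis and HrsPerTicket on the other recreates the four quadrants described above.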

[Scatter chart: ticket volume (x) vs hours per ticket (y), split into four quadrants: High/Low Volume x Quick/Complex. Points for Techs A-O, colored by billable % band: 75%+, 60-75%, below 60%]
5.0 Workload Distribution by Ticket Volume

How tickets are distributed across resource tiers. Some technicians handle thousands of tickets while others work on a handful of complex items.

75.4% of tickets: top 5 resources (2,000+ tickets)
22.2% of tickets: mid-tier (400-800 tickets)
2.0% of tickets: project resources (under 150 tickets)
DAX Query: Ticket Distribution by Resource
EVALUATE
ADDCOLUMNS(
  SUMMARIZECOLUMNS(
    'BI_Autotask_Time_Entries'[resource_name],
    "TicketCount", DISTINCTCOUNT('BI_Autotask_Time_Entries'[ticket_id])
  ),
  "Tier", SWITCH(
    TRUE(),
    [TicketCount] >= 2000, "High Volume",
    [TicketCount] >= 400, "Mid Volume",
    "Project / Low Volume"
  )
)
ORDER BY [TicketCount] DESC
6.0 Client Coverage per Resource

The number of unique clients each resource has worked with. Broad coverage means the technician touches many accounts; narrow coverage suggests specialization or a dedicated assignment.

Tech M: 146
Tech J
Tech N: 137
Tech B: 117
Tech D: 115
Tech E: 104
Tech K: 84
Tech H: 77
Tech C: 54
Tech O: 51
Tech A: 46
Tech F: 45
Tech G: 44
Tech I: 29
Tech L: 25
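A coverage figure like this comes from a distinct count of clients per resource. The sketch below assumes the client column is called account_name, which may differ in your own model:

DAX Query (sketch): Unique Clients per Resource
EVALUATE
SUMMARIZECOLUMNS(
  'BI_Autotask_Time_Entries'[resource_name],
  -- 'account_name' is an assumed column name; adjust to your model
  "ClientCount", DISTINCTCOUNT('BI_Autotask_Time_Entries'[account_name])
)
ORDER BY [ClientCount] DESC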
7.0 Key Findings
!

Tech C and Tech I are billing under 56% of their time

Tech C logs 2,060 hours but only bills 55.6%. That is 915 non-billable hours. Tech I is worse at 52.7% with 735 non-billable hours. Combined, that is 1,650 hours of unbilled work. At even a conservative rate of $100/hr, that represents $165,000 in lost revenue potential. These two resources need an immediate time entry audit.

!

Tech L logs 84 hours per ticket on average

With only 17 tickets and 1,433 total hours, Tech L is clearly a project resource. The 91.3% billable rate is excellent, so this is not a billing problem. But 84 hours per ticket raises the question: are these tickets scoped correctly? Are time entries being logged against too few tickets? If a project runs 200 hours, it should probably be broken into subtasks.

!

Five resources handle 75% of all tickets

Tech N, Tech M, Tech D, Tech E, and Tech J collectively process 13,422 of the top 15's 17,806 tickets. That concentration creates a risk: if any of these five leave or burn out, a large share of ticket throughput disappears. Consider cross-training mid-tier resources (Tech B, Tech H, Tech K) to absorb overflow.

Tech N and Tech M set the benchmark for the team

Tech N runs at 97.1% billable across 3,275 tickets and 137 clients. Tech M is at 94.7% across 3,220 tickets and 146 clients. Both combine high volume, high billing rates, and broad client coverage. These are your model resources. Study what they do differently and use their patterns as the training standard.

!

Tech I covers only 29 clients with a low billable rate

Narrow client coverage combined with 52.7% billable suggests Tech I may be spending too much time on internal tasks, training, or administrative work. Alternatively, they could be assigned to a small group of clients with heavy non-billable support obligations. Either way, this resource needs a workload review.

8.0 Recommended Actions

Concrete steps to improve team utilization and balance workloads.

1

Audit non-billable hours for Tech C and Tech I

Pull the full time entry breakdown for both resources. Categorize every non-billable entry: internal meetings, training, admin, travel, or miscategorized billable work. Target: identify at least 200 hours per resource that should either be reclassified as billable or eliminated through process changes. Review within 30 days.
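As a starting point for that audit, a query along these lines totals non-billable hours per resource and category. The work_type_name column is an assumption for illustration; substitute whatever categorization column your model actually uses:

DAX Query (sketch): Non-Billable Hours by Category
EVALUATE
CALCULATETABLE(
  SUMMARIZECOLUMNS(
    'BI_Autotask_Time_Entries'[resource_name],
    'BI_Autotask_Time_Entries'[work_type_name],  -- assumed category column
    "NonBillableHours", SUM('BI_Autotask_Time_Entries'[hours_worked])
  ),
  'BI_Autotask_Time_Entries'[is_non_billable] = TRUE
)
ORDER BY [NonBillableHours] DESC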

2

Break down Tech L's project tickets into subtasks

84 hours per ticket makes it nearly impossible to track progress or spot scope creep. Work with Tech L to restructure ongoing projects into smaller, trackable tickets. This gives better visibility into where time goes and makes it easier to flag when a project is running over budget.

3

Build a cross-training plan for the top 5 ticket handlers

Tech N, M, D, E, and J handle 75% of ticket volume. Create a knowledge-sharing program where mid-tier techs (B, H, K) shadow the top performers for two weeks. Goal: increase the number of resources who can handle 1,000+ tickets per year from 5 to 8, reducing single-point-of-failure risk.

9.0 Frequently Asked Questions
How is billable percentage calculated?

Billable percentage is calculated as Billable Hours divided by Total Hours. Billable hours are time entries where is_non_billable is FALSE in the BI_Autotask_Time_Entries table. A resource logging 1,500 billable hours out of 2,000 total hours has a 75% billable rate.
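As a reusable measure, the calculation can be sketched like this, using the same table and columns as the billable percentage query in section 3.0:

Billable % =
DIVIDE(
  CALCULATE(
    SUM('BI_Autotask_Time_Entries'[hours_worked]),
    'BI_Autotask_Time_Entries'[is_non_billable] = FALSE
  ),
  SUM('BI_Autotask_Time_Entries'[hours_worked])
)

DIVIDE returns blank rather than an error when a resource has zero logged hours.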

What is a good billable percentage for an MSP technician?

Industry benchmarks vary, but most MSPs target 70-80% for service desk technicians and 60-70% for senior engineers who also handle internal projects. Resources below 60% typically need a workload audit to understand where non-billable time is going.

Why are technician names anonymized?

This is a demo report using synthetic data. In your own deployment, Proxuma Power BI shows the actual resource names from Autotask. The anonymized labels (Tech A, Tech B, etc.) are just placeholders for public demonstration.

Why does Tech L have 84 hours per ticket?

Tech L handles only 17 tickets with 1,433 total hours, which points to project-based work. Long-running infrastructure deployments, migrations, or consulting engagements often have few tickets but many hours each. The metric is not inherently bad but suggests the ticketing structure should be reviewed for better granularity.

Can I filter this report by date range or department?

Yes. Add a date filter to the DAX queries using the date_worked column in BI_Autotask_Time_Entries. You can also filter by resource_role or queue_name to segment by department. The Proxuma Power BI model supports all standard Autotask dimensions.
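For example, wrapping any of the queries above in CALCULATETABLE restricts it to a date range (the dates here are placeholders):

DAX Query (sketch): Filtering by Date Range
EVALUATE
CALCULATETABLE(
  SUMMARIZECOLUMNS(
    'BI_Autotask_Time_Entries'[resource_name],
    "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked])
  ),
  'BI_Autotask_Time_Entries'[date_worked] >= DATE(2024, 1, 1),
  'BI_Autotask_Time_Entries'[date_worked] <= DATE(2024, 3, 31)
)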

Can I run these DAX queries on my own Power BI dataset?

Yes. Copy any query from the toggles above and paste it into DAX Studio or the Power BI Desktop performance analyzer. The queries reference standard Proxuma data model tables and measures that exist in every Proxuma Power BI deployment.

Generate this report from your own data

Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports in minutes, not days.
