Engineer productivity ranking combined with SmileBack CSAT patterns. 15 engineers, 10,178 reviews, 26,869 total hours logged.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service managers, account managers, and MSP leadership tracking customer experience
How often: Weekly for trend monitoring, monthly for team reviews, quarterly for QBRs
Productivity data comes from BI_Autotask_Time_Entries; CSAT data comes from BI_SmileBack_Reviews, which uses a three-point scale: positive (1), neutral (0), and negative (-1). The 92.2% positive rate means 9,385 of the 10,178 reviews were positive. Direct per-engineer CSAT is not available due to a data model limitation: SmileBack reviews link to tickets, not to individual time entries.
Top 15 engineers by number of CSAT ratings received. Color badges indicate CSAT tiers: green = above 80% positive, amber = 60-80%, red = below 60%.
| Engineer | CSAT | Ratings |
|---|---|---|
| Tracy Fitzpatrick | 92.8% | 180 |
| Maxwell Reed | 81.6% | 174 |
| Gregory Horn | 65.5% | 142 |
| Jonathon Burton | 87.2% | 133 |
| Brandon Bishop | 78.3% | 120 |
| Daniel Daniels | 84.3% | 115 |
| Andrew Roberts | 84.1% | 107 |
| John Mahoney | 81.1% | 90 |
| Mr. Craig Peck | 88.6% | 88 |
| Stephen Nelson | 86.0% | 86 |
| Rose Russell | 75.6% | 82 |
| Paula Lewis MD | 87.3% | 79 |
| Sean White | 90.5% | 74 |
| Nathan Curtis | 100.0% | 58 |
| Jeremy White | 71.2% | 52 |
```dax
EVALUATE
TOPN(
    15,
    FILTER(
        ADDCOLUMNS(
            SUMMARIZE(
                'BI_SmileBack_Reviews',
                'BI_Autotask_Tickets'[primary_resource_name]
            ),
            -- CSAT positive rate: share of reviews rated 1
            "PositivePct", DIVIDE(
                CALCULATE(
                    COUNTROWS('BI_SmileBack_Reviews'),
                    'BI_SmileBack_Reviews'[rating] = 1
                ),
                CALCULATE(COUNT('BI_SmileBack_Reviews'[rating]))
            ),
            "TotalRatings", CALCULATE(COUNT('BI_SmileBack_Reviews'[rating]))
        ),
        NOT ISBLANK('BI_Autotask_Tickets'[primary_resource_name])
    ),
    [TotalRatings], DESC
)
ORDER BY [TotalRatings] DESC
```
Segmented bars showing each engineer's billable (teal) vs non-billable (slate) hours. Sorted by total hours descending.
```dax
EVALUATE
TOPN(
    15,
    SUMMARIZECOLUMNS(
        'BI_Autotask_Time_Entries'[resource_name],
        "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked]),
        "BillableHrs", SUM('BI_Autotask_Time_Entries'[Billable Hours]),
        "NonBillableHrs", SUM('BI_Autotask_Time_Entries'[Non billable Hours]),
        "BillablePct", DIVIDE(
            SUM('BI_Autotask_Time_Entries'[Billable Hours]),
            SUM('BI_Autotask_Time_Entries'[hours_worked]),
            0
        )
    ),
    [TotalHours], DESC
)
ORDER BY [TotalHours] DESC
```
Positive CSAT rate per ticket type. Because direct per-engineer CSAT is not available, ticket type patterns are the best proxy for where satisfaction issues originate.
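Under the same assumptions as the queries in this report, the per-type breakdown could be produced with a query along these lines (a sketch; the ticket_type column name is an assumption, as it does not appear in the other queries):

```dax
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_SmileBack_Reviews',
        'BI_Autotask_Tickets'[ticket_type]  -- assumed column name
    ),
    -- positive rate: share of reviews rated 1 for each ticket type
    "PositivePct", DIVIDE(
        CALCULATE(
            COUNTROWS('BI_SmileBack_Reviews'),
            'BI_SmileBack_Reviews'[rating] = 1
        ),
        CALCULATE(COUNTROWS('BI_SmileBack_Reviews'))
    ),
    "TotalRatings", CALCULATE(COUNTROWS('BI_SmileBack_Reviews'))
)
ORDER BY [PositivePct] ASC  -- worst-performing ticket types first
```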
Total hours logged per engineer over the last 12 months. The top three engineers account for 24.5% of all hours.
```dax
EVALUATE
TOPN(
    15,
    SUMMARIZECOLUMNS(
        'BI_Autotask_Time_Entries'[resource_name],
        "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked]),
        "BillableHrs", SUM('BI_Autotask_Time_Entries'[Billable Hours]),
        "NonBillableHrs", SUM('BI_Autotask_Time_Entries'[Non billable Hours]),
        "TicketCount", DISTINCTCOUNT('BI_Autotask_Time_Entries'[ticket_id])
    ),
    [TotalHours], DESC
)
ORDER BY [TotalHours] DESC
```
Engineers mapped by ticket volume (horizontal) and billable rate (vertical). High volume + high billable rate = your most efficient team members.
Engineers D, E, M, and N sit in the "Stars" quadrant with billable rates between 80.9% and 97.1% across 2,297 to 3,275 tickets each. These are the team members you should study, not just celebrate. What do they do differently with time entry discipline, ticket triage, or scope control? Whatever it is, that behavior should become the baseline for coaching others.
Engineer C logs 2,060 hours but only bills 55.6% of them, with just 99 unique tickets. That combination -- high hours, low billable rate, low ticket count -- typically means project work or internal tasks that are not being billed correctly, or time spent on work that should be categorized differently. Engineer I has a similar pattern at 52.7% across 489 tickets. Both need a time entry audit before a coaching conversation.
The gap between incident CSAT (86.4%) and alert CSAT (93.7%) is 7.3 percentage points. Engineers who handle a higher share of incidents will look worse in any future per-engineer CSAT analysis. Before drawing conclusions about individual satisfaction scores, you need to weight for ticket type mix. An engineer who resolves 500 incidents at 86% positive is performing better than one who handles 50 service requests at 90%.
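One way to express that weighting in the model is an expected-rate measure: for each engineer, weight the team-wide positive rate of each ticket type by the engineer's share of tickets in that type. This is a sketch under stated assumptions: a base [Positive %] measure, plus ticket_type and primary_resource_name columns on BI_Autotask_Tickets, none of which are confirmed by this report's queries.

```dax
Expected Positive % =
SUMX(
    VALUES('BI_Autotask_Tickets'[ticket_type]),
    -- this engineer's share of tickets falling in the current type
    VAR TypeShare =
        DIVIDE(
            CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
            CALCULATE(
                COUNTROWS('BI_Autotask_Tickets'),
                REMOVEFILTERS('BI_Autotask_Tickets'[ticket_type])
            )
        )
    -- team-wide positive rate for the current type (engineer filter removed)
    VAR TypeBaseline =
        CALCULATE(
            [Positive %],
            REMOVEFILTERS('BI_Autotask_Tickets'[primary_resource_name])
        )
    RETURN TypeShare * TypeBaseline
)
```

Comparing an engineer's actual [Positive %] against [Expected Positive %] separates a genuine satisfaction gap from an unfavorable ticket-type mix.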
5 priorities based on the findings above
Engineers C and I both bill under 56% of their hours, well below the 70% team target. Before scheduling a coaching session, pull their time entries for the last 90 days and check: are they logging internal project work that should be billed? Are they spending time on tasks that could be delegated or automated? The fix might be a categorization problem, not a performance problem.
Engineers B, I, and J all handle significant ticket volumes but cannot keep their billable rate above 65%. Pair each of them with a Star engineer (D, E, M, or N) for a two-week shadow period focused on how the Star handles scope control and time entry hygiene. The goal is not to work harder, it is to work the same hours with better billing discipline.
Incidents generate the lowest CSAT at 86.4%. Study how Engineers D and E handle incidents (they carry high volumes with high billable rates) and document their workflows. A standardized playbook for initial response, escalation criteria, and client communication can bring the incident CSAT closer to the 90%+ range seen in other ticket types.
The top engineer logs 2,400 hours while the bottom logs 1,344 -- a 78% difference. A spread this wide suggests uneven ticket routing or availability gaps. Check your dispatch rules and queue assignments. Balanced workloads reduce burnout risk for your top performers and give lower-volume engineers more opportunities to build their skills.
The biggest gap in this analysis is the lack of direct per-engineer CSAT data. SmileBack reviews link to tickets, not to time entries. If you add a calculated column or a bridge table in your Power BI model that maps the primary resource on each ticket to its SmileBack rating, you unlock true per-engineer satisfaction tracking. That turns this report from a proxy analysis into a direct coaching tool.
SmileBack sends a survey when a ticket is closed and links the review to the ticket, not to a specific engineer. Since multiple engineers can log time on the same ticket, there is no one-to-one relationship between a SmileBack rating and an individual team member. The report uses ticket type CSAT as the best available proxy while combining it with per-engineer productivity data.
SmileBack uses a three-point scale: positive (rating = 1), neutral (rating = 0), and negative (rating = -1). The positive rate in this report is the percentage of reviews with a rating of 1. Out of 10,178 total reviews, 9,385 were positive (92.2%), 339 were neutral (3.3%), and 454 were negative (4.5%).
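Expressed as DAX measures, the positive rate is simply the share of reviews rated 1 (a sketch; measure names are illustrative, table and column names follow the queries in this report):

```dax
Positive Reviews =
CALCULATE(
    COUNTROWS('BI_SmileBack_Reviews'),
    'BI_SmileBack_Reviews'[rating] = 1  -- positive only; neutral (0) and negative (-1) excluded
)

Total Reviews = COUNTROWS('BI_SmileBack_Reviews')

CSAT Positive % = DIVIDE([Positive Reviews], [Total Reviews])
```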
Most MSPs target between 65% and 80% billable rate for their service desk engineers. Rates above 90% are exceptional but can also signal that engineers are not getting enough training or development time. Rates below 60% usually indicate a time entry discipline issue, excessive internal project work, or a misalignment between the engineer's role and the work they are assigned.
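Those bands can be encoded as a classification measure for conditional formatting. This is a sketch that assumes a [BillablePct] measure like the one in the time-entry query above; the band labels and cutoffs mirror the guidance in this report, not an official standard:

```dax
Billable Rate Band =
SWITCH(
    TRUE(),
    [BillablePct] > 0.90, "Exceptional - check for missing development time",
    [BillablePct] >= 0.65, "Within typical MSP target (65-80%)",
    [BillablePct] >= 0.60, "Slightly below target",
    "Below 60% - audit time entry discipline"
)
```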
Create a bridge table that links each ticket to its primary resource (the engineer with the most time entries on that ticket). Then join this bridge table to the SmileBack reviews table. This gives you a one-to-one relationship between an engineer and the CSAT rating on their primary tickets. You can build this as a DAX calculated table or as a Power Query step.
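A minimal sketch of that bridge as a DAX calculated table, assuming the table and column names used elsewhere in this report (when two engineers tie on hours, MAXX falls back to the alphabetically last name):

```dax
Ticket Primary Resource =
ADDCOLUMNS(
    DISTINCT('BI_Autotask_Time_Entries'[ticket_id]),
    "primary_resource_name",
        MAXX(
            TOPN(
                1,
                -- context transition inside CALCULATETABLE scopes this to the current ticket
                CALCULATETABLE(
                    ADDCOLUMNS(
                        DISTINCT('BI_Autotask_Time_Entries'[resource_name]),
                        "@hrs", CALCULATE(SUM('BI_Autotask_Time_Entries'[hours_worked]))
                    )
                ),
                [@hrs], DESC  -- engineer with the most hours on the ticket
            ),
            'BI_Autotask_Time_Entries'[resource_name]
        )
)
```

Once this table is related to BI_SmileBack_Reviews on the ticket key, per-engineer CSAT becomes a direct group-by on primary_resource_name.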
Engineers like Engineer L (1,433 hours, 17 tickets) and Engineer F (1,862 hours, 84 tickets) are likely working on long-running projects or implementations rather than standard service desk tickets. Their high hours with low ticket counts suggest project-based work where a single engagement spans many weeks. This is normal for senior engineers or consultants.
Yes. The DAX queries use all available data by default, but you can add a date filter on BI_Autotask_Time_Entries[date_worked] to narrow the time window. For quarterly reviews, filtering to the last 90 days gives a more focused picture of recent performance trends.
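For example, the hours query above could be restricted to the last 90 days by adding a filter-table argument to SUMMARIZECOLUMNS (a sketch; assumes date_worked is a date-typed column):

```dax
EVALUATE
TOPN(
    15,
    SUMMARIZECOLUMNS(
        'BI_Autotask_Time_Entries'[resource_name],
        -- filter argument: keep only time entries from the last 90 days
        FILTER(
            ALL('BI_Autotask_Time_Entries'[date_worked]),
            'BI_Autotask_Time_Entries'[date_worked] >= TODAY() - 90
        ),
        "TotalHours", SUM('BI_Autotask_Time_Entries'[hours_worked])
    ),
    [TotalHours], DESC
)
ORDER BY [TotalHours] DESC
```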
Yes. Connect Proxuma Power BI to your SmileBack and Autotask accounts, add an AI tool (Claude, ChatGPT, or Copilot) via MCP, and ask the same question. The AI writes the DAX queries, runs them against your real data, and produces a report like this one in under fifteen minutes.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.