SLA compliance, CSAT scores, and ticket volume combined into a single health status per client. Generated by AI via Proxuma Power BI MCP server.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Account managers, MSP owners, and service delivery leads
How often: Monthly for client reviews, quarterly for QBRs, on-demand when client signals change
EVALUATE
ROW(
    "ClientsAnalyzed", COUNTROWS(
        TOPN(10, VALUES('BI_Autotask_Companies'[company_name]),
            CALCULATE(COUNTROWS('BI_Autotask_Tickets')), DESC)),
    "AvgCSAT", AVERAGE('BI_SmileBack_Reviews'[rating]),
    "AvgFRMet", DIVIDE(
        CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
            'BI_Autotask_Tickets'[first_response_met] + 0 = 1),
        COUNTROWS('BI_Autotask_Tickets'))
)
All 10 clients ranked by health status. Scoring: FR Met >60% = good, 40-60% = warning, <40% = critical. Resolution Met >60% = good, 40-60% = warning, <40% = critical. CSAT >85% = good, 70-85% = warning, <70% = critical. Overall status uses the worst individual score.
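The worst-of-three rule is easy to sketch outside of DAX. Here is a minimal Python illustration; the thresholds are the ones stated above, while the function names and data structure are our own:

```python
# Sketch of the report's scoring rule: classify each metric against its
# thresholds, then take the worst individual score as the overall status.
THRESHOLDS = {
    "fr_met":  (0.40, 0.60),  # <40% critical, 40-60% warning, >60% good
    "res_met": (0.40, 0.60),
    "csat":    (0.70, 0.85),  # <70% critical, 70-85% warning, >85% good
}

def classify(metric: str, value: float) -> str:
    critical, good = THRESHOLDS[metric]
    if value < critical:
        return "critical"
    if value < good:
        return "warning"
    return "good"

def overall_status(fr_met: float, res_met: float, csat: float) -> str:
    # Overall health is the worst of the three individual scores.
    order = ["good", "warning", "critical"]
    scores = [classify("fr_met", fr_met),
              classify("res_met", res_met),
              classify("csat", csat)]
    return max(scores, key=order.index)

# Rivers Rogers Mitchell's figures from this report: good CSAT,
# critical first response -> critical overall.
print(overall_status(fr_met=0.288, res_met=0.504, csat=0.886))  # critical
```

This is why a client with an 88.6% CSAT can still carry a "Critical" label: one red metric is enough.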
| Metric | Value |
|---|---|
| Companies | 550 (531 active) |
| Active Contracts | 1,377 |
| CSAT Rating | 87.8% |
| CSAT Reviews | 10,178 |
EVALUATE
ROW(
    "TotalCompanies", COUNTROWS('BI_Autotask_Companies'),
    "ActiveCompanies", CALCULATE(COUNTROWS('BI_Autotask_Companies'),
        -- assumes [status] stores the literal "Active"; adjust if it is a flag column
        'BI_Autotask_Companies'[status] = "Active"),
    "ActiveContracts", CALCULATE(COUNTROWS('BI_Autotask_Contracts'),
        'BI_Autotask_Contracts'[contract_status_name] = "Active"),
    "CSATReviews", COUNTROWS('BI_SmileBack_Reviews'),
    "AvgCSAT", AVERAGE('BI_SmileBack_Reviews'[rating])
)
The 5 clients classified as critical, with specific metrics showing where each one is failing
**Nelson Taylor Hicks** · CSAT: 52.5% (critical) · FR Met: 37.8% (critical) · Res Met: 71.2% (good) · 1,728 tickets, 875 hours worked. This is the most urgent account. A CSAT of 52.5% means roughly half of all survey responses are negative. The first response SLA is failing at 37.8%, so customers wait too long for initial contact. Despite a decent resolution rate, the poor first impression is driving dissatisfaction. The gap between resolution compliance (71.2%) and first response compliance (37.8%) suggests the team eventually solves tickets but takes too long to acknowledge them.
CSAT: 70.0% (warning) · FR Met: 30.7% (critical) · Res Met: 47.3% (warning) · 1,803 tickets, 949 hours worked. Every metric is in warning or critical territory. Only 30.7% of tickets get a first response within SLA, and fewer than half are resolved on time. The CSAT at 70.0% confirms what the SLA numbers suggest: this client is receiving consistently poor service. With 1,803 tickets, the volume is high enough that the problem is systemic.
**Rivers Rogers Mitchell** · FR Met: 28.8% (critical) · Res Met: 50.4% (warning) · CSAT: 88.6% (good) · 6,381 tickets, 1,091 hours worked. Your highest-volume client with 6,381 tickets, and only 28.8% get a first response within SLA. The CSAT is still 88.6%, a paradox: customers are happy with eventual outcomes while the SLA numbers are terrible. The low hours-per-ticket ratio (0.17h) suggests many tickets are quick fixes or automated closures. A 28.8% first response rate is a contractual risk regardless of satisfaction.
**Price-Gomez** · FR Met: 31.7% (critical) · Res Met: 52.1% (warning) · CSAT: 80.6% (warning) · 2,180 tickets, 823 hours worked. First response is failing badly at 31.7%, and resolution is below the 60% threshold at 52.1%. The CSAT of 80.6% is in warning range. This client has not yet reached the anger threshold, but the combination of poor SLA compliance and middling satisfaction makes them a candidate for decline over the next quarter if nothing changes.
FR Met: 30.7% (critical) · Res Met: 47.4% (warning) · CSAT: 81.0% (warning) · 994 tickets, 476 hours worked. Similar pattern to Price-Gomez: poor first response, below-target resolution, and CSAT in warning range. The lower ticket volume (994) means targeted improvements could move these numbers faster than at larger accounts.
EVALUATE
VAR _TopClients =
    TOPN(30,
        ADDCOLUMNS(
            VALUES('BI_Autotask_Companies'[company_name]),
            "Tickets", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
            "WorkedHours", CALCULATE(SUM('BI_Autotask_Tickets'[worked_hours])),
            "FRMetPct", DIVIDE(
                CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
                    'BI_Autotask_Tickets'[first_response_met] + 0 = 1),
                CALCULATE(COUNTROWS('BI_Autotask_Tickets'))),
            "ResMetPct", DIVIDE(
                CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
                    'BI_Autotask_Tickets'[resolution_met] + 0 = 1),
                CALCULATE(COUNTROWS('BI_Autotask_Tickets'))),
            "AvgCSAT", CALCULATE(AVERAGE('BI_SmileBack_Reviews'[rating]))
        ),
        [Tickets], DESC)
RETURN
    FILTER(_TopClients,
        [FRMetPct] < 0.4 || [ResMetPct] < 0.4 || [AvgCSAT] < 0.7)
ORDER BY [AvgCSAT] ASC
Percentage of tickets where the first response was delivered within the SLA target. The 60% threshold separates acceptable from at-risk performance.
EVALUATE
ADDCOLUMNS(
    VALUES('BI_Autotask_Companies'[company_name]),
    "FRMetPct", DIVIDE(
        CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
            'BI_Autotask_Tickets'[first_response_met] + 0 = 1),
        CALCULATE(COUNTROWS('BI_Autotask_Tickets'))),
    "Tickets", CALCULATE(COUNTROWS('BI_Autotask_Tickets'))
)
ORDER BY [FRMetPct] DESC
Only 1 out of 10 clients (Wall PLC) has a clean bill of health across all three dimensions: first response SLA above 60%, resolution SLA above 60%, and CSAT above 85%. That is a 10% healthy rate across your top clients by volume.
First response SLA is the weakest metric across the board. Seven clients fall below the 60% threshold, and five are below 40%. The average first response compliance across all 10 clients is 46.4%. This suggests the issue is not client-specific. It is likely a staffing, routing, or dispatching problem that affects the entire service desk.
Nelson Taylor Hicks is the most urgent case. Their CSAT of 52.5% is the lowest in the portfolio and well into critical territory. Combined with a 37.8% first response rate, this client is getting slow initial contact and leaving unhappy. The resolution rate (71.2%) is decent, which means the work gets done eventually, but the perception of poor service is already set by the time the first response arrives.
Rivers Rogers Mitchell presents an interesting case. They generate 6,381 tickets (the highest volume by far) with a first response rate of only 28.8%, yet their CSAT sits at 88.6%. The low hours-per-ticket (0.17h) suggests a high proportion of automated or quick-close tickets. The CSAT may reflect satisfaction with outcomes rather than speed. A 28.8% FR rate is still a contractual liability if SLA penalties apply.
CSAT scores are generally stronger than SLA metrics. Only 1 client (Nelson Taylor Hicks) has CSAT below 70%, while 4 clients sit above 85%. This indicates that end-user satisfaction is less damaged than the SLA numbers suggest, which gives you a window to fix the underlying operational issues before customer sentiment catches up.
EVALUATE
ROW(
    "TotalTickets", COUNTROWS('BI_Autotask_Tickets'),
    "OverallFRMet", DIVIDE(
        CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
            'BI_Autotask_Tickets'[first_response_met] + 0 = 1),
        COUNTROWS('BI_Autotask_Tickets')),
    "OverallResMet", DIVIDE(
        CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
            'BI_Autotask_Tickets'[resolution_met] + 0 = 1),
        COUNTROWS('BI_Autotask_Tickets')),
    "OverallCSAT", AVERAGE('BI_SmileBack_Reviews'[rating]),
    "TotalWorkedHours", SUM('BI_Autotask_Tickets'[worked_hours])
)
5 priorities based on the findings above
**Nelson Taylor Hicks:** A CSAT of 52.5% means this client is actively unhappy. Combined with a 37.8% first response rate, the experience is slow and unsatisfying. Pull the last 30 days of tickets for this client, identify the top 5 unhappy survey responses, and schedule an escalation call with their decision-maker. Do not wait for the next QBR. This is a retention risk.
Every metric is below target: 30.7% FR, 47.3% resolution, 70.0% CSAT. With 1,803 tickets this is not a small account. Look at ticket routing rules, dispatcher queue times, and whether this client's tickets are being deprioritized by the automated triage. A root-cause fix here will move all three numbers.
Seven out of ten clients are below the 60% FR threshold. This is not a per-client problem. Check whether the service desk is understaffed during peak hours, whether auto-acknowledgment is enabled, and whether the SLA clock starts at ticket creation or first assignment. A single operational fix (auto-response, dispatcher staffing, or SLA configuration) could lift all seven accounts simultaneously.
**Rivers Rogers Mitchell:** This client generates 6,381 tickets with only 1,091 hours worked (0.17h per ticket). The 28.8% FR rate may be deflated by automated or bulk tickets that should not carry SLA targets. Review whether all ticket types for this client are correctly classified. Excluding informational or monitoring tickets from SLA measurement would give a more accurate picture.
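A quick way to test that hypothesis is to recompute FR compliance with and without the suspect ticket types. A Python sketch under assumed field and type names (these are illustrative, not the actual Autotask schema):

```python
# Illustrative only: field names and type labels are invented, not the
# real Autotask schema. Shows how excluding non-SLA ticket types
# changes the measured first-response compliance.
SLA_EXEMPT_TYPES = {"monitoring_alert", "informational"}  # assumed labels

def fr_compliance(tickets, exclude_exempt=False):
    """Share of tickets whose first response met SLA."""
    pool = [t for t in tickets
            if not (exclude_exempt and t["type"] in SLA_EXEMPT_TYPES)]
    if not pool:
        return None
    return sum(t["first_response_met"] for t in pool) / len(pool)

tickets = [
    {"type": "incident",         "first_response_met": True},
    {"type": "incident",         "first_response_met": False},
    {"type": "monitoring_alert", "first_response_met": False},
    {"type": "monitoring_alert", "first_response_met": False},
]
print(fr_compliance(tickets))                       # raw: 0.25
print(fr_compliance(tickets, exclude_exempt=True))  # after exclusion: 0.5
```

If the excluded-types number jumps, the misclassification theory holds and the fix belongs in ticket classification, not the service desk.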
Wall PLC is the only client with healthy scores across all three metrics: 73.6% FR, 72.5% resolution, 89.4% CSAT. Study what is different about how their tickets are handled. Same technicians? Different queue? Faster escalation? Whatever is working for Wall PLC should be replicated for the accounts that are struggling.
-- Tickets per client with SLA breakdown
EVALUATE
ADDCOLUMNS(
    VALUES('BI_Autotask_Companies'[company_name]),
    "TotalTickets", CALCULATE(COUNTROWS('BI_Autotask_Tickets')),
    "FRMet", CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
        'BI_Autotask_Tickets'[first_response_met] + 0 = 1),
    "FRBreached", CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
        'BI_Autotask_Tickets'[first_response_met] + 0 = 0),
    "ResMet", CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
        'BI_Autotask_Tickets'[resolution_met] + 0 = 1),
    "ResBreached", CALCULATE(COUNTROWS('BI_Autotask_Tickets'),
        'BI_Autotask_Tickets'[resolution_met] + 0 = 0),
    "AvgCSAT", CALCULATE(AVERAGE('BI_SmileBack_Reviews'[rating])),
    "WorkedHours", CALCULATE(SUM('BI_Autotask_Tickets'[worked_hours]))
)
ORDER BY [TotalTickets] DESC
Three metrics are scored independently: First Response SLA Met %, Resolution SLA Met %, and CSAT %. Each is classified as good (green), warning (amber), or critical (red) based on defined thresholds. The overall health status is set to the worst individual score. So a client with good CSAT but critical FR compliance gets a "Critical" status.
First Response Met: above 60% is good, 40-60% is warning, below 40% is critical. Resolution Met: above 60% is good, 40-60% is warning, below 40% is critical. CSAT: above 85% is good, 70-85% is warning, below 70% is critical. These thresholds are configurable in the DAX query and can be adjusted to match your internal SLA targets.
SmileBack sends satisfaction surveys when tickets are closed in Autotask. The CSAT percentage shown here represents the proportion of positive (happy) responses out of all responses for that client. Proxuma Power BI pulls SmileBack data automatically and links it to the matching Autotask company record.
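As a rough sketch of that calculation, assume each rating is stored as 1 for a positive reaction and 0 otherwise (which would also explain why a simple AVERAGE over the rating column yields a percentage); the function name and sample data here are illustrative:

```python
# Hypothetical sketch: CSAT as positive reviews over all reviews.
# Assumes "rating" is 1 (positive) or 0 (not positive), so the mean
# of the column equals the positive-response share.
def csat(ratings):
    return sum(1 for r in ratings if r == 1) / len(ratings) if ratings else None

reviews = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 positive out of 8 responses
print(round(csat(reviews), 3))  # 0.75
```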
When 7 out of 10 top clients are below the 60% threshold, the problem is usually operational rather than client-specific. Common causes: SLA timers starting at ticket creation rather than assignment, no auto-acknowledgment configured, understaffing during peak hours, or tickets sitting in a dispatch queue too long before being picked up. Check your Autotask SLA configuration and dispatcher workflow first.
Yes. Rivers Rogers Mitchell has an 88.6% CSAT but a 28.8% first response rate, which puts them in critical status. High CSAT with poor SLA means the client is happy with outcomes but the operational metrics are failing. This is a contractual risk if SLA penalties exist, even if the client is not complaining yet.
Yes. Connect Proxuma Power BI to your Autotask and SmileBack accounts, add an AI tool (Claude, ChatGPT, or Copilot) via MCP, and ask the same question. The AI writes the DAX queries, runs them against your real data, and produces a report like this in under fifteen minutes.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.