A cross-source analysis combining SmileBack CSAT, Datto RMM alert volume, and Autotask SLA performance to build a single service quality picture per client. This report maps how alert noise, response times, and customer satisfaction interact across 67,521 tickets and 135,387 alerts.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: MSP operations teams and service delivery managers
How often: As needed for specific analysis or reporting requirements
Aggregated metrics from SmileBack, Datto RMM, and Autotask across all clients in the last 12 months.
Top 10 clients by alert volume with CSAT, first response SLA, and resolution SLA side by side. Color-coded badges show where each metric lands relative to targets.
| CSAT Metric | Value |
|---|---|
| Current positive rate | 87.7% |
| Last year positive rate | 78.3% |
| Total ratings | 10,178 |
```dax
EVALUATE
ROW(
    "CSATAvg", [CSAT - Average Rating],
    "CSATLastYear", [CSAT - Average Rating - Last Year],
    "Ratings", [CSAT - Total Ratings]
)
```
Comparing alert noise against customer satisfaction. Bars show alert count; teal markers show CSAT. Clients with high alerts and low CSAT need immediate attention.
```dax
EVALUATE
ROW(
    "ResolutionMet", [Tickets - Resolution Met %],
    "FirstHourFix", [Tickets - First Hour Fix %],
    "SameDayRes", [Tickets - Same Day Resolution %],
    "ClosureRate", [Tickets - Closure Rate %],
    "TotalTickets", [Tickets - Count - Created]
)
```
First response and resolution SLA compliance shown as donut charts for the overall portfolio, plus a per-client comparison bar.
First Response vs Resolution SLA by Client
```dax
EVALUATE
TOPN(
    10,
    ADDCOLUMNS(
        SUMMARIZE(Bridge_All_Companies, Bridge_All_Companies[company_id]),
        "CompName", CALCULATE(MAX('BI_Autotask_Companies'[company_name])),
        "CSAT", [CSAT - Average Rating],
        "AlertCount", CALCULATE(COUNTROWS('BI_Datto_Rmm_Alerts')),
        "FRMet", [Tickets - First Response Met %],
        "ResMet", [Tickets - Resolution Met %]
    ),
    [AlertCount], DESC
)
```
How do the three sides of the triangle relate to each other? Directional observations based on the top-10 client data.
| | CSAT | FR SLA | Res SLA | Alerts |
|---|---|---|---|---|
| CSAT | -- | Weak + | Moderate + | Neutral |
| FR SLA | Weak + | -- | Strong + | Moderate - |
| Res SLA | Moderate + | Strong + | -- | Weak - |
| Alerts | Neutral | Moderate - | Weak - | -- |
With first response SLA at 80.1% against a 90% target, roughly 13,400 of the 67,521 tickets (19.9%) had a late initial reply. Client J is the worst offender at 43.2%, and Client G sits at 68.6%. These two clients alone drag the portfolio average down. Resolution SLA (90.2%) barely meets target, suggesting the team catches up after the slow start.
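The late-ticket figure above can be backed out directly from the report's totals. A minimal sketch, using only numbers stated in this report (67,521 tickets, 80.1% first response met, 90% target):

```python
# Sketch: back out late-ticket counts from the SLA percentages in this report.
total_tickets = 67_521
fr_met_pct = 0.801
target_pct = 0.90

# All tickets whose first response missed SLA
late_first_responses = round(total_tickets * (1 - fr_met_pct))
# Additional on-time replies needed to close the gap to the 90% target
gap_to_target = round(total_tickets * (target_pct - fr_met_pct))

print(late_first_responses)  # ~13,437 late first responses
print(gap_to_target)         # ~6,685 more on-time replies needed
```

Note the distinction: the ~13,400 figure counts every late first response, while closing the gap to the 90% target requires roughly 6,700 additional on-time replies.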
With 26,873 alerts (nearly 3x the next-highest client), Client A floods the queue. Despite that, satisfaction stays high. The likely explanation: the team knows Client A well and handles their issues quickly, even if first response SLA suffers (73.7%). The volume is a capacity problem, not a quality problem.
Client C is the only account where all three triangle sides are red or amber: 70.0% CSAT, 75.4% first response, 87.1% resolution. This is the clearest signal of a service quality problem. With 7,430 alerts, the noise level is high enough to explain the slow responses -- but the low CSAT means clients are noticing.
Client I is the benchmark: 100% CSAT, 92.3% first response, 97.5% resolution, and 2,646 alerts. Whatever the team is doing differently for this account -- dedicated resources, proactive monitoring, better documentation -- should be documented and applied to struggling accounts.
Alert volume alone does not predict satisfaction. Client A has the most alerts by a wide margin (26,873) yet maintains 89.4% CSAT. Client I has 2,646 alerts and hits 100% CSAT. The relationship between noise and satisfaction is weak at best. What matters more is how the alerts translate into ticket handling quality.
First response SLA is the weakest link in the triangle. At 80.1% overall, it lags resolution SLA by a full 10 points. Five of the ten clients fall below 85% on first response, while only two fall below 85% on resolution. The pattern is consistent: the team is slow to pick up tickets but resolves them within target once they start working. This points to a triage or queue management issue rather than a skills gap.
Low CSAT and low SLA do not always overlap. Client F has 73.6% CSAT but strong SLA numbers (87.5% FR, 93.7% Res). That means dissatisfaction comes from something other than speed. It could be communication quality, recurring issues, or unmet expectations around scope. Client J flips the pattern: 88.6% CSAT despite 43.2% first response. Some clients care less about speed and more about outcomes.
The top 3 alert generators account for 43,610 alerts (32.2% of total). Clients A, B, and C together produce nearly a third of all RMM noise. Reducing alert fatigue at these three accounts through better thresholds, suppression rules, or monitor tuning would have the biggest impact on the overall alert-to-ticket ratio and free up dispatcher bandwidth for faster first responses.
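The concentration figure is easy to verify from the report's totals. A quick sketch (the 43,610 top-3 sum and 135,387 total are stated above; the per-client split for B and C is not):

```python
# Sketch: verify the alert-concentration share quoted above.
total_alerts = 135_387
top3_alerts = 43_610  # Clients A + B + C combined, per the report

share = top3_alerts / total_alerts
print(f"{share:.1%}")  # 32.2% of all RMM alerts
```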
Practical steps to close the quality gaps identified in this report.
Pull all 43,610 alerts from these three clients and categorize by type, severity, and whether they generated a ticket. Identify monitors that fire repeatedly without action and suppress or tune them. Target: reduce alert volume by 30% within 60 days without missing genuine incidents.
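In absolute terms, the 30% reduction target works out as follows (a sketch using the 43,610-alert figure stated above):

```python
# Sketch: translate the 60-day tuning target into absolute alert counts.
top3_alerts = 43_610   # Clients A, B, C combined, per the report
reduction_target = 0.30

remaining = round(top3_alerts * (1 - reduction_target))
suppressed = top3_alerts - remaining
print(remaining, suppressed)  # ~30,527 alerts remaining, ~13,083 suppressed
```

Roughly 13,000 alerts would need to be suppressed or tuned away across the three accounts without losing genuine incidents.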
Client C is the only account where CSAT, first response, and resolution all underperform. Schedule a service review meeting. Review the last 90 days of tickets, pull SmileBack comments (not just scores), and identify the top 3 recurring complaint categories. Set a 30-day CSAT improvement target of 80%.
The 9.9-point gap between actual (80.1%) and target (90%) first response SLA is a queue problem. Review dispatcher workflows, auto-assignment rules, and ticket routing. Client J at 43.2% first response needs a dedicated look at whether their tickets are being routed correctly or sitting in a backlog.
100% CSAT with 92.3% FR and 97.5% Res on 2,646 alerts is the gold standard in this dataset. Identify what makes this account different: dedicated engineer, proactive maintenance, smaller scope, or better-tuned monitors. Package those findings as a playbook for the three struggling accounts (C, G, J).
SmileBack uses a 3-point scale: negative (-1), neutral (0), and positive (+1). The CSAT positive rate is the percentage of responses that scored +1. Strictly, the average rating equals the positive share minus the negative share, so an average of 0.877 lines up with an 87.7% positive rate only when negative responses are rare. This is different from a 5-star scale -- a response is simply negative, neutral, or positive, with no graded middle ground.
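The relationship between the average and the positive rate can be illustrated with a small worked example. The counts below are hypothetical -- the report states 10,178 total ratings and an 87.7% positive rate, but not the negative/neutral split:

```python
# Sketch: how -1/0/+1 ratings map to positive rate vs average rating.
# Hypothetical split: 877 positive, 100 neutral, 23 negative (1,000 responses).
ratings = [1] * 877 + [0] * 100 + [-1] * 23

positive_rate = ratings.count(1) / len(ratings)
average = sum(ratings) / len(ratings)

print(positive_rate)  # 0.877
print(average)        # 0.854 = positive share (0.877) minus negative share (0.023)
```

The two numbers coincide only when there are no negative responses; otherwise the average sits below the positive rate.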
Bridge_All_Companies is a cross-source bridge table in the Proxuma data model. It maps the same client across Autotask, Datto RMM, and SmileBack by company ID. Without it, you cannot join alert data from RMM with ticket data from PSA and satisfaction data from SmileBack in a single query.
Some clients value outcome quality over response speed. Client J may have a less time-sensitive workload, or the relationship manager sets expectations well. That said, 43.2% first response is a risk -- one bad incident could shift satisfaction fast. The SLA gap should still be addressed.
The portfolio average is 2.0 alerts per ticket. A ratio above 3.0 usually means monitors are too sensitive and creating noise. Below 1.5 suggests good threshold tuning. The ideal depends on the client's environment size, but anything above 4.0 warrants an alert hygiene review.
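The ratio and the thresholds above can be sketched as a simple check, using the alert and ticket totals stated in this report (the flag labels are illustrative):

```python
# Sketch: portfolio alert-to-ticket ratio and the hygiene thresholds above.
total_alerts = 135_387
total_tickets = 67_521

ratio = total_alerts / total_tickets

def hygiene_flag(r: float) -> str:
    """Bucket a ratio per the thresholds described in this report."""
    if r > 4.0:
        return "alert hygiene review"
    if r > 3.0:
        return "monitors likely too sensitive"
    if r < 1.5:
        return "well-tuned thresholds"
    return "within normal range"

print(round(ratio, 1), hygiene_flag(ratio))  # 2.0, within normal range
```

The same check can be run per client to flag accounts like Client A for a hygiene review.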
Monthly for tracking SLA and CSAT trends. After any alert tuning exercise, review within 2 weeks to verify the changes had the expected impact. Quarterly for the full triangle analysis, especially when preparing for client business reviews.
Yes. Copy any query from the toggles above and paste it into DAX Studio or the Power BI Desktop performance analyzer. The queries reference standard Proxuma data model tables and measures that exist in every Proxuma Power BI deployment.