Analysis and reporting on CSAT sentiment distribution for managed service providers.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service managers, account managers, and MSP leadership tracking customer experience
How often: Weekly for trend monitoring, monthly for team reviews, quarterly for QBRs
SmileBack distills every customer interaction into a single click: happy, neutral, or unhappy. Of the 3,024 survey responses collected on closed tickets this quarter, 90.0% came back happy. That number sits well above the MSP industry median of 85%, but the aggregate hides meaningful variation. The sections below break it apart by client, by queue, and by technician to show where the experience holds up and where it quietly erodes.
// Portfolio-level headline: average rating, response count, and sentiment split
EVALUATE
ROW(
    "CSATAvg", [CSAT - Average Rating],
    "Ratings", [CSAT - Total Ratings],
    "Positive", [CSAT - Positive],
    "Neutral", [CSAT - Neutral],
    "Negative", [CSAT - Negative]
)
When you rank clients by their happy-response percentage, a clear split emerges. The top five clients cluster between 91% and 96%, while the bottom two - Pinnacle Tech (78.6%) and Vanguard Tech (68.2%) - drag the portfolio average down. This is not a bell curve. It is bimodal: most clients are genuinely satisfied, and a small group is struggling with repeated negative experiences that a portfolio-level average would never surface.
Vanguard Tech stands out with 24.0% unhappy responses across 124 tickets. That is nearly six times the portfolio average. Drilling into their ticket history shows a concentration of unhappy ratings on network-related issues and a pattern of re-opened tickets. Pinnacle Tech follows at 13.0% unhappy, primarily driven by escalation tickets where first-contact resolution was not achieved.
// One row per client: response count plus happy, neutral, and unhappy shares
Client Sentiment Breakdown =
ADDCOLUMNS(
    SUMMARIZE( BI_SmileBack_Reviews, BI_SmileBack_Reviews[company_name] ),
    "Total", CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) ),
    "Happy %",
        DIVIDE(
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ), BI_SmileBack_Reviews[rating] = 5 ),
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) )
        ),
    "Neutral %",
        DIVIDE(
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ), BI_SmileBack_Reviews[rating] = 3 ),
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) )
        ),
    "Unhappy %",
        DIVIDE(
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ), BI_SmileBack_Reviews[rating] = 1 ),
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) )
        )
)
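To drill into a single client's unhappy responses, as described for Vanguard Tech above, a filtered detail query is enough. The sketch below assumes ticket_id, review_date, and comment columns on BI_SmileBack_Reviews; substitute whatever your model actually exposes.

```dax
// Sketch: every unhappy review for one client, newest first.
// ticket_id, review_date, and comment are assumed column names.
EVALUATE
CALCULATETABLE(
    SELECTCOLUMNS(
        BI_SmileBack_Reviews,
        "Ticket", BI_SmileBack_Reviews[ticket_id],
        "Date", BI_SmileBack_Reviews[review_date],
        "Comment", BI_SmileBack_Reviews[comment]
    ),
    BI_SmileBack_Reviews[company_name] = "Vanguard Tech",
    BI_SmileBack_Reviews[rating] = 1
)
ORDER BY [Date] DESC
```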
Not all unhappy ratings carry the same signal. A negative rating on a password reset typically points to a slow or awkward interaction. A negative rating on a P1 escalation usually reflects frustration with the underlying problem, not the technician. Breaking sentiment down by service queue separates structural dissatisfaction from quality-of-service issues.
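The queue-level split follows the same pattern as the client breakdown. This is a minimal sketch; queue_name is an assumed column for the Autotask queue label.

```dax
// Sketch: unhappy share by service queue, worst first.
// queue_name is an assumed column for the Autotask queue label.
EVALUATE
ADDCOLUMNS(
    SUMMARIZE( BI_SmileBack_Reviews, BI_SmileBack_Reviews[queue_name] ),
    "Total", CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) ),
    "Unhappy %",
        DIVIDE(
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ), BI_SmileBack_Reviews[rating] = 1 ),
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) )
        )
)
ORDER BY [Unhappy %] DESC
```

The monthly trend query below applies the same idea along the time axis, feeding the trendline discussed next.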
EVALUATE
SUMMARIZECOLUMNS(
    BI_SmileBack_Surveys[survey_month],
    "AvgScore", [SB - Average CSAT Score],
    "ResponseCount", COUNTROWS( BI_SmileBack_Surveys ),
    "ResponseRate", [SB - Survey Response Rate %]
)
ORDER BY BI_SmileBack_Surveys[survey_month] ASC
Happy-response percentage has climbed steadily over the past six months, from 86.3% in November to 91.2% in April. That is a 4.9 percentage-point gain - meaningful in a metric that tends to plateau above 85%. The trendline below maps monthly happy % (green area) against unhappy % (dashed red line) to show whether gains come from converting unhappy responses to neutral, or neutral to happy.
The unhappy line (dashed red) dropped from 5.8% to 3.6% over the same period. Most of the improvement came in March and April, coinciding with the introduction of mandatory callback protocols for neutral and unhappy responses. The gap between happy and unhappy is widening, which is the pattern you want to see - it means gains are real, not just noise from a smaller response pool.
Technician-level data reveals an interesting wrinkle: speed and satisfaction do not move in lockstep. Lisa Park has the fastest average handle time at 2.4 hours but ranks second-to-last on happy percentage at 82.4%. Sarah Mitchell, the top-rated technician at 95.7%, averages 4.2 hours per ticket. The implication is not that slower is better, but that thoroughness and communication matter more than raw speed when it comes to how the customer remembers the interaction.
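One way to see the speed-versus-satisfaction pattern in the data is a per-technician table. In this sketch, technician_name and handle_time_hours are assumed column names on BI_SmileBack_Reviews; your model may keep handle time on the tickets table instead.

```dax
// Sketch: average handle time next to happy share, per technician.
// technician_name and handle_time_hours are assumed column names.
EVALUATE
ADDCOLUMNS(
    SUMMARIZE( BI_SmileBack_Reviews, BI_SmileBack_Reviews[technician_name] ),
    "Avg Handle Hrs", CALCULATE( AVERAGE( BI_SmileBack_Reviews[handle_time_hours] ) ),
    "Happy %",
        DIVIDE(
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ), BI_SmileBack_Reviews[rating] = 5 ),
            CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) )
        )
)
ORDER BY [Happy %] DESC
```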
Three clients in the portfolio have fewer than 10 CSAT responses this quarter. Their sentiment data is statistically unreliable and should not be used for benchmarking or QBR presentations. These clients either have low ticket volume or their tickets are not triggering SmileBack surveys consistently. The sketch after the table shows one way to flag them automatically.
| Client | Tickets Closed | CSAT Responses | Response Rate | Status |
|---|---|---|---|---|
| Atlas Corp | 42 | 6 | 14.3% | Low confidence |
| NovaBridge IT | 28 | 3 | 10.7% | Low confidence |
| Ironclad MSP | 18 | 0 | 0.0% | No data |
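A minimal sketch for that flag, assuming the reviews table is the base. Note that a client with zero responses, like Ironclad MSP, never appears in BI_SmileBack_Reviews at all; catching those requires summarizing the tickets or companies table instead.

```dax
// Sketch: clients with too few responses to benchmark this quarter.
// Zero-response clients will not appear here; see the note above.
EVALUATE
FILTER(
    ADDCOLUMNS(
        SUMMARIZE( BI_SmileBack_Reviews, BI_SmileBack_Reviews[company_name] ),
        "Responses", CALCULATE( COUNTROWS( BI_SmileBack_Reviews ) )
    ),
    [Responses] < 10
)
ORDER BY [Responses] ASC
```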
The gap between top and bottom performers is wider than expected. The weakest clients score more than 20 percentage points below the portfolio average - Vanguard Tech sits 21.8 points under it - indicating structural issues that require targeted intervention.
Clients in the moderate-risk category show a declining trend over the past quarter. Without intervention, three or four of them may shift into the high-risk category within 60 days.
The top 30% of the portfolio maintains stable performance above target, indicating that current best practices are effective and can serve as a model for the rest.
1. Conduct a targeted review of all high-risk clients within two weeks. Document the root cause for each and create a remediation plan with clear deadlines and accountable owners.
2. Implement automated monitoring for the moderate-risk group. Set thresholds that trigger an alert when performance drops 5 percentage points below target, enabling early intervention before clients slip into high risk (a measure sketch follows this list).
3. Schedule this report monthly as part of the QBR process. Use the trend data to verify that improvement initiatives are delivering measurable results across multiple quarters.
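A minimal sketch of such a threshold measure, assuming a flat 90% target (this portfolio's current happy rate). In practice the target would live in a parameter table or a dedicated target measure, which is hypothetical here.

```dax
// Sketch: returns 1 when happy % drops 5+ points below a 90% target.
// The 0.90 target is an assumption; parameterize it for real use.
Alert Flag =
VAR HappyPct =
    DIVIDE(
        CALCULATE( COUNTROWS( BI_SmileBack_Reviews ), BI_SmileBack_Reviews[rating] = 5 ),
        COUNTROWS( BI_SmileBack_Reviews )
    )
RETURN
    IF( HappyPct < 0.90 - 0.05, 1, 0 )
```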
SmileBack uses a three-point scale: Happy (5), Neutral (3), and Unhappy (1). The survey is sent immediately after ticket closure and asks a single question. This simplicity drives high response rates compared to traditional 1–10 CSAT scales, but it also means you lose granularity. A “neutral” response could mean “fine but unremarkable” or “mildly frustrated but not enough to click unhappy.” Always follow up on neutrals to understand the underlying sentiment.
Escalation and P1 tickets involve business-impacting events - server outages, network failures, data loss risks. Customers are already frustrated before the ticket is opened. The survey captures the entire experience, not just the resolution. Even a technically perfect response can get an unhappy rating if the customer lost revenue during the downtime. This is why benchmarking escalation CSAT against routine tickets is misleading. Compare escalation CSAT against your own historical escalation CSAT instead.
The weighted average assigns the SmileBack numeric values: Happy = 5, Neutral = 3, Unhappy = 1. The formula is: (2,721 × 5 + 178 × 3 + 125 × 1) / 3,024 = 4.72. This single number is useful for trend tracking but masks the distribution. A score near 4.7 could come from 90.0% happy / 5.9% neutral / 4.1% unhappy (this report, 4.72) or from 85% happy / 15% neutral / 0% unhappy (4.70). Always look at the distribution, not just the average.
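Because the rating column already stores 5, 3, and 1 (as the breakdown query above assumes), the weighted average reduces to a plain average over all responses. A one-line measure sketch:

```dax
// Sketch: weighted CSAT average. rating already holds 5 / 3 / 1,
// so a plain AVERAGE over all responses is equivalent.
CSAT Weighted Avg = AVERAGE( BI_SmileBack_Reviews[rating] )
```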
Industry data from ConnectWise and Datto peer groups places the median MSP happy rate at 85%, with top-quartile performers at 92%+. This portfolio’s 90.0% sits between median and top quartile. The more useful benchmark is internal: track your own trend month over month and compare client-to-client within your own book of business. External benchmarks vary widely based on survey methodology, response rates, and client mix.
Send surveys immediately after ticket closure (within 5 minutes, not end-of-day batches). Make sure surveys go to the actual end user, not just the primary billing contact. Keep the survey to a single click - SmileBack already does this well. Close the feedback loop by telling clients what changed because of their input. MSPs that share “you said, we did” summaries in QBRs see 10–15% higher response rates.
Respond to all unhappy and neutral responses within 24 hours. A brief acknowledgment (“thank you for the feedback, here is what we are doing about it”) is enough. For happy responses, a short thank-you improves perceived relationship quality without adding significant workload. The act of responding to surveys increases future response rates by 10–15% and signals to clients that their feedback is not disappearing into a void.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.