Total response volume, per-client breakdown, collection gaps, and where SmileBack might not be reaching your end users. Generated by AI via Proxuma Power BI MCP server.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service managers, account managers, and MSP leadership tracking customer experience
How often: Weekly for trend monitoring, monthly for team reviews, quarterly for QBRs
EVALUATE
ROW(
    "TotalReviews", COUNTROWS('BI_SmileBack_Reviews'),
    "TotalRatings", [CSAT - Total Ratings],
    "TotalRatingsLastMonth", [CSAT - Total Ratings - Last Month],
    "TotalRatingsLastYear", [CSAT - Total Ratings - Last Year],
    "DistinctCompanies", DISTINCTCOUNT('BI_SmileBack_Reviews'[company_id]),
    "DistinctContacts", DISTINCTCOUNT('BI_SmileBack_Reviews'[contact_id])
)
(Chart: all clients with SmileBack responses, ranked by total response count; horizontal bars show relative volume.)
| Month | Responses | Positive | Negative | CSAT % |
|---|---|---|---|---|
| 2026-01 | 118 | 110 | 2 | 93.2% |
| 2025-12 | 142 | 122 | 15 | 85.9% |
| 2025-11 | 151 | 131 | 15 | 86.8% |
| 2025-10 | 182 | 159 | 18 | 87.4% |
| 2025-09 | 166 | 146 | 12 | 88.0% |
| 2025-08 | 125 | 111 | 9 | 88.8% |
| 2025-07 | 139 | 125 | 11 | 89.9% |
| 2025-06 | 135 | 111 | 17 | 82.2% |
| 2025-05 | 120 | 105 | 11 | 87.5% |
| 2025-04 | 126 | 106 | 17 | 84.1% |
| 2025-03 | 169 | 158 | 6 | 93.5% |
| 2025-02 | 128 | 120 | 7 | 93.8% |
EVALUATE
TOPN(
    12,
    GROUPBY(
        ADDCOLUMNS(
            FILTER('BI_SmileBack_Reviews', NOT(ISBLANK('BI_SmileBack_Reviews'[rated_on]))),
            "YM", LEFT('BI_SmileBack_Reviews'[rated_on], 7)
        ),
        [YM],
        "Reviews", COUNTX(CURRENTGROUP(), 'BI_SmileBack_Reviews'[proxuma_source_id]),
        "Positive", SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
        "Negative", SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = -1, 1, 0))
    ),
    [YM], DESC
)
ORDER BY [YM] DESC
How survey responses are distributed across the client base, and why that matters for data quality
The most important finding in this dataset is the extreme concentration. Client A accounts for 75.5% of all responses (7,688 out of 10,178). This likely means Client A is your internal organization or your largest managed client. If you remove Client A, the remaining 14 clients produced just 2,490 responses total.
Seven clients have fewer than 50 responses each. At that volume, a handful of bad interactions can swing the average significantly. Client I stands out: 59 responses with a 0.525 average rating on the -1-to-1 scale, meaning 16 of those responses (27%) are unhappy. That is a data quality problem and a service problem in the same account.
EVALUATE
VAR _Total = COUNTROWS(BI_SmileBack_Reviews)
RETURN
ADDCOLUMNS(
    SUMMARIZE(
        BI_SmileBack_Reviews,
        BI_Autotask_Companies[company_name]
    ),
    -- CALCULATE forces context transition so each row counts only its own company
    "ResponseCount", CALCULATE(COUNTROWS(BI_SmileBack_Reviews)),
    "ShareOfTotal", DIVIDE(
        CALCULATE(COUNTROWS(BI_SmileBack_Reviews)), _Total),
    "HappyCount", CALCULATE(
        COUNTROWS(BI_SmileBack_Reviews),
        BI_SmileBack_Reviews[rating] = 1),
    "UnhappyCount", CALCULATE(
        COUNTROWS(BI_SmileBack_Reviews),
        BI_SmileBack_Reviews[rating] = -1)
)
ORDER BY [ResponseCount] DESC
Clients with fewer than 50 responses. Their CSAT averages are directional at best and should not drive account decisions without more data.
| Client | Responses | Avg Rating | Happy % | Reliability | Risk Flag |
|---|---|---|---|---|---|
| Client K | 46 | 0.826 | 91.3% | Medium | Approaching threshold |
| Client L | 45 | 0.600 | 77.8% | Medium | Low satisfaction + low volume |
| Client M | 44 | 0.750 | 84.1% | Medium | Below average |
| Client N | 42 | 0.810 | 88.1% | Medium | Approaching threshold |
| Client O | 42 | 0.881 | 88.1% | Medium | Healthy |
EVALUATE
FILTER(
    ADDCOLUMNS(
        SUMMARIZE(
            BI_SmileBack_Reviews,
            BI_Autotask_Companies[company_name]
        ),
        -- CALCULATE forces context transition so aggregates are per company
        "ResponseCount", CALCULATE(COUNTROWS(BI_SmileBack_Reviews)),
        "AvgRating", CALCULATE(AVERAGE(BI_SmileBack_Reviews[rating])),
        "HappyPct", DIVIDE(
            CALCULATE(COUNTROWS(BI_SmileBack_Reviews),
                BI_SmileBack_Reviews[rating] = 1),
            CALCULATE(COUNTROWS(BI_SmileBack_Reviews)))
    ),
    [ResponseCount] < 50
)
ORDER BY [ResponseCount] DESC
The split between happy (rating = 1) and unhappy (rating = -1) responses per client. SmileBack uses a simple smiley scale: happy (1), neutral (0), or unhappy (-1).
| Client | Total | Happy | Unhappy | Happy % |
|---|---|---|---|---|
| Client A | 7,688 | 7,196 | 492 | 93.6% |
| Client B | 384 | 338 | 46 | 88.0% |
| Client C | 382 | 323 | 59 | 84.6% |
| Client I | 59 | 43 | 16 | 72.9% |
| Client L | 45 | 35 | 10 | 77.8% |
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        BI_SmileBack_Reviews,
        BI_Autotask_Companies[company_name]
    ),
    -- CALCULATE forces context transition so counts are per company
    "TotalCount", CALCULATE(COUNTROWS(BI_SmileBack_Reviews)),
    "HappyCount", CALCULATE(
        COUNTROWS(BI_SmileBack_Reviews),
        BI_SmileBack_Reviews[rating] = 1),
    "UnhappyCount", CALCULATE(
        COUNTROWS(BI_SmileBack_Reviews),
        BI_SmileBack_Reviews[rating] = -1),
    "HappyPct", DIVIDE(
        CALCULATE(COUNTROWS(BI_SmileBack_Reviews),
            BI_SmileBack_Reviews[rating] = 1),
        CALCULATE(COUNTROWS(BI_SmileBack_Reviews)))
)
ORDER BY [TotalCount] DESC
You have collected 10,178 SmileBack responses across your client base. That sounds like a solid dataset until you look at the distribution. Client A alone accounts for 7,688 of those responses, which is 75.5% of your total volume. If Client A is your internal organization, then your external client CSAT data is built on just 2,490 responses spread across 14 companies.
The estimated response rate of 15.1% (10,178 responses against 67,521 total tickets) is on the lower end of the typical MSP range. SmileBack benchmarks suggest 15-25% response rates are normal, with well-configured setups reaching 30% or higher. A low response rate does not mean clients are unhappy. It often means the survey is not reaching the right person, or the timing of the survey email does not match when the end user checks their inbox.
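The response-rate arithmetic above can be checked in a few lines. A minimal Python sketch; the counts are the demo totals quoted in this report, and the benchmark bands are the ones cited above:

```python
# Estimated survey response rate: responses collected vs. tickets closed.
# Totals are the demo figures quoted in this report.
total_responses = 10_178
total_tickets = 67_521

response_rate = total_responses / total_tickets
print(f"Response rate: {response_rate:.1%}")  # 15.1%

# Benchmark bands cited above: 15-25% typical for MSPs, 30%+ well-configured.
if response_rate < 0.15:
    band = "below typical range: check survey delivery settings"
elif response_rate < 0.25:
    band = "within the typical MSP range"
else:
    band = "well-configured territory"
print(band)
```

Swap in your own ticket and response counts from the queries above to track this over time.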
Client I has the worst satisfaction score in the dataset at 0.525 on the -1-to-1 scale; 16 of their 59 responses (27%) were unhappy. That is a small sample, but the pattern is clear enough to act on. Client L at 0.600 with 45 responses tells a similar story. Both of these accounts need attention before the next QBR.
On the other end, Client G stands out with a 97.0% happy rate across 66 responses. That is an excellent result, and the volume is high enough to be meaningful. Client D (93.7% happy, 142 responses) and Client E (93.3% happy, 104 responses) are also solid performers with reliable sample sizes.
The biggest gap in this data is what you cannot see: clients who submit tickets but never generate a SmileBack response. Those accounts do not appear in this dataset at all. If you have 20 active clients but only 15 show up with responses, those missing 5 are your blind spot.
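The blind-spot check described above is just a set difference between active companies in the PSA and companies that appear in the survey data. A minimal sketch; the client names are illustrative placeholders, not from this report:

```python
# Companies active in Autotask (e.g. any company with closed tickets this year).
active_clients = {"Client A", "Client B", "Client C", "Client P", "Client Q"}

# Companies that appear at least once in the SmileBack response data.
surveyed_clients = {"Client A", "Client B", "Client C"}

# Active clients with tickets but zero survey responses: your blind spots.
blind_spots = sorted(active_clients - surveyed_clients)
print(blind_spots)  # ['Client P', 'Client Q']
```

In practice the two sets would come from an Autotask ticket query and the per-client response query shown earlier.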
Five priorities based on the findings above
Client I has a 0.525 average rating across 59 responses, with 16 unhappy responses making up 27% of their feedback. Client L sits at 0.600 with 45 responses. Both are well below the portfolio average, and Client L is also below the 50-response reliability threshold. Pull the actual unhappy tickets for these clients and look for patterns: same technician, same ticket category, same SLA misses. A targeted fix at the root cause will do more than a follow-up call.
If you have active clients in Autotask that do not appear in this report at all, SmileBack is either not configured for their ticket queues or the survey emails are going to the wrong contact. Check the SmileBack settings for every active company. A client with 200 closed tickets per year and zero survey responses is a complete blind spot in your satisfaction monitoring.
Clients K through O each have between 42 and 46 responses. That is close to the 50-response threshold where averages start to become statistically reliable. Check whether surveys are being sent to end users or only to the primary contact. Sending surveys to the actual person who submitted the ticket can substantially increase response rates. Even reaching 60-80 responses per client per year makes the data usable for account decisions.
Client A contributes 75.5% of all responses. If this is your internal organization, its volume will dominate every portfolio-wide average and make your overall numbers look better (or worse) than external reality. Create a filtered view that excludes Client A to see your true external CSAT performance. Excluding Client A, the happy rate drops from the blended 92.2% to roughly 88% across the remaining 14 clients.
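The effect of excluding Client A can be worked through directly from the totals quoted in this report (a 92.2% blended happy rate, and Client A's 7,196 happy responses out of 7,688). A sketch; the result is approximate because the blended rate is rounded:

```python
# Portfolio totals quoted in this report.
total_responses = 10_178
blended_happy_rate = 0.922                  # stated portfolio-wide happy rate
total_happy = round(total_responses * blended_happy_rate)   # ~9,384

# Client A's contribution.
client_a_responses = 7_688
client_a_happy = 7_196

# External picture once Client A is removed.
external_responses = total_responses - client_a_responses   # 2,490
external_happy = total_happy - client_a_happy
external_rate = external_happy / external_responses
print(f"External happy rate: {external_rate:.1%}")  # about 87.9%
```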
Client G has a 97.0% happy rate across 66 responses. Client D sits at 93.7% with 142 responses. Client E at 93.3% with 104 responses. These are not flukes: the sample sizes are large enough and the satisfaction rates are consistently high. Ask these clients for a testimonial or a reference call with a prospect. Data-backed satisfaction numbers close deals faster than feature lists.
SmileBack sends a one-click satisfaction survey via email when a ticket is closed in Autotask. The customer clicks a happy face (rating = 1), a neutral face (rating = 0), or an unhappy face (rating = -1). No login required, no multi-step form. The simplicity is what drives response rates. Proxuma Power BI pulls these responses through the SmileBack connector and links them to the matching Autotask company for reporting.
SmileBack benchmarks show that a typical MSP sees 15-25% of closed tickets generate a survey response. Well-configured setups where surveys go to the actual end user (not just the primary contact) can reach 30% or higher. If you are below 15%, check your survey delivery settings. The survey email timing, subject line, and recipient all affect response rates.
As a practical rule for MSPs, 30 responses per client is the minimum for a directionally useful average. At 50+ responses, the average becomes statistically stable enough to use in QBR conversations. Below 30, a single bad week can swing the score by 10 percentage points or more. This report flags clients below these thresholds so you know which numbers to trust.
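The swing is easy to see with a quick calculation. A minimal sketch with illustrative counts (not from this report): at 25 responses, three bad interactions move the happy percentage by 12 points.

```python
# A client with 25 responses in a period.
n = 25
happy_before = 23   # a normal month: 92% happy
happy_after = 20    # the same month with three bad interactions: 80% happy

pct_before = happy_before / n
pct_after = happy_after / n
swing = (pct_before - pct_after) * 100
print(f"swing: {swing:.0f} percentage points")  # swing: 12 percentage points
```

At 100+ responses, the same three bad interactions would move the score by only 3 points, which is why the report treats larger samples as QBR-ready.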
This is common in MSP datasets. The largest client (or your internal organization) generates the most tickets and therefore the most survey opportunities. If that client also has a higher response rate, the effect compounds. Always look at external client metrics separately to get an accurate picture of your service delivery to paying customers.
Yes. Connect Proxuma Power BI to your SmileBack and Autotask accounts, add an AI tool (Claude, ChatGPT, or Copilot) via MCP, and ask the same question. The AI writes the DAX queries, runs them against your real data, and produces a report like this in under fifteen minutes. Your actual client names, ticket volumes, and response counts replace the demo data.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.