Generated by AI via Proxuma Power BI MCP server. Total ratings, distribution, monthly rate, and industry benchmark comparison for SmileBack-integrated MSPs.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service managers, account managers, and MSP leadership tracking customer experience
How often: Weekly for trend monitoring, monthly for team reviews, quarterly for QBRs
EVALUATE
ROW(
    "CSATAvg", [CSAT - Average Rating],
    "Ratings", [CSAT - Total Ratings],
    "CSATLastYear", [CSAT - Average Rating - Last Year]
)
SmileBack uses a three-value scale: 1 = Happy, 0 = Neutral, -1 = Unhappy. Distribution based on 10,178 all-time reviews.
The 87.7% positive rate is a strong result. For context, SmileBack considers 80%+ to be a healthy baseline for MSPs. Every unhappy response is worth reviewing: SmileBack links each rating back to the originating ticket in Autotask, so you can trace exactly which ticket generated a negative response and who handled it.
The neutral category (7.5%) often gets overlooked, but 763 neutral responses represent clients who were not unhappy enough to complain but not satisfied enough to endorse your service. That is a recoverable group if you follow up.
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_SmileBack_Reviews',
        'BI_SmileBack_Reviews'[rating]
    ),
    "Count", CALCULATE(COUNTROWS('BI_SmileBack_Reviews')),
    "Percentage", DIVIDE(
        CALCULATE(COUNTROWS('BI_SmileBack_Reviews')),
        CALCULATE(COUNTROWS('BI_SmileBack_Reviews'), ALL('BI_SmileBack_Reviews'[rating]))
    )
)
ORDER BY 'BI_SmileBack_Reviews'[rating] DESC
Putting 10,178 total ratings and 1,475 last-year ratings into context
The gap between the all-time total (10,178) and last year (1,475) tells you the data spans several years. If we divide 10,178 by 123 per month, that implies roughly 82 months of history, or just under 7 years. That is a meaningful longitudinal dataset. Year-over-year, positive ratings jumped from 78.8% to 87.7%, an 8.9 percentage point improvement that is unlikely to be accidental.
EVALUATE
-- Tag each review in the trailing 12 months with its YYYY-MM bucket, then group
-- on that bucket so each month's counts are actually restricted to that month.
VAR _Last12Months =
    ADDCOLUMNS(
        CALCULATETABLE(
            'BI_SmileBack_Reviews',
            DATESINPERIOD(
                'BI_SmileBack_Reviews'[review_date],
                LASTDATE('BI_SmileBack_Reviews'[review_date]),
                -12, MONTH
            )
        ),
        "YearMonth", FORMAT('BI_SmileBack_Reviews'[review_date], "YYYY-MM")
    )
RETURN
    GROUPBY(
        _Last12Months,
        [YearMonth],
        "Total_Reviews", SUMX(CURRENTGROUP(), 1),
        "Happy_Reviews", SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
        "Pct_Happy", DIVIDE(
            SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
            SUMX(CURRENTGROUP(), 1)
        )
    )
ORDER BY [YearMonth] ASC
How does 123 responses per month compare to what other MSPs achieve with SmileBack?
At 123 responses per month, your volume comfortably clears SmileBack's guideline of 15–20 responses per technician per month for a team of 6–8 technicians (roughly 90–160 at the team level). The more interesting benchmark is the positive rate: 87.7% sits nearly 8 points above the typical industry average of ~80%. That is a meaningful gap that suggests strong service delivery, a well-chosen client base, or both.
One note: response rate (how many surveys get answered relative to how many are sent) is different from response volume. SmileBack does not always surface response rate directly, but you can estimate it by comparing total ratings against total closed tickets in the same period via Autotask data in Power BI.
EVALUATE
VAR _LastYear_Reviews =
    CALCULATE(
        COUNTROWS('BI_SmileBack_Reviews'),
        DATESINPERIOD(
            'BI_SmileBack_Reviews'[review_date],
            LASTDATE('BI_SmileBack_Reviews'[review_date]),
            -1, YEAR
        )
    )
VAR _LastYear_Closed =
    CALCULATE(
        COUNTROWS('BI_Autotask_Tickets'),
        'BI_Autotask_Tickets'[Status] = "Complete",
        DATESINPERIOD(
            'BI_Autotask_Tickets'[completedDate],
            LASTDATE('BI_Autotask_Tickets'[completedDate]),
            -1, YEAR
        )
    )
RETURN
    ROW(
        "Reviews_LastYear", _LastYear_Reviews,
        "Closed_Tickets_LastYear", _LastYear_Closed,
        "Est_Response_Rate", DIVIDE(_LastYear_Reviews, _LastYear_Closed),
        "Opted_Out_Contacts", CALCULATE(
            COUNTROWS('BI_Autotask_Contacts'),
            'BI_Autotask_Contacts'[opted_out_from_surveys] = TRUE()
        )
    )
What this data means for your service delivery
At 123 reviews per month and 10,178 all-time, you have enough data to segment by client, technician, ticket type, and time period without running into sample size problems. Most CSAT programs fail not because of low scores but because of low volume. Yours does not have that problem.
A jump from 78.8% to 87.7% positive year over year is too large to dismiss as noise. Something changed: possibly a staffing change, a process improvement, or a shift in the client base. Segmenting the data by technician and client for both periods would help identify where the improvement came from, and whether it is holding.
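One way to run that segmentation by client is sketched below, comparing the positive rate in the most recent 12 months against the 12 months before that. It assumes the reviews table carries a client column named company_name, which is not confirmed by this report; substitute your model's actual column names.

```dax
EVALUATE
VAR _MaxDate = MAX('BI_SmileBack_Reviews'[review_date])
VAR _Cutoff  = EDATE(_MaxDate, -12)   -- boundary between the two 12-month windows
VAR _Start   = EDATE(_MaxDate, -24)   -- start of the prior-year window
RETURN
    GROUPBY(
        'BI_SmileBack_Reviews',
        'BI_SmileBack_Reviews'[company_name],   -- assumed column name
        "Pct_Happy_Last12", DIVIDE(
            SUMX(CURRENTGROUP(),
                IF('BI_SmileBack_Reviews'[review_date] > _Cutoff
                    && 'BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
            SUMX(CURRENTGROUP(),
                IF('BI_SmileBack_Reviews'[review_date] > _Cutoff, 1, 0))
        ),
        "Pct_Happy_Prior12", DIVIDE(
            SUMX(CURRENTGROUP(),
                IF('BI_SmileBack_Reviews'[review_date] > _Start
                    && 'BI_SmileBack_Reviews'[review_date] <= _Cutoff
                    && 'BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
            SUMX(CURRENTGROUP(),
                IF('BI_SmileBack_Reviews'[review_date] > _Start
                    && 'BI_SmileBack_Reviews'[review_date] <= _Cutoff, 1, 0))
        )
    )
```

Clients with the largest gap between the two columns account for most of the year-over-year improvement; swapping company_name for a technician column gives the same view per technician.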
Each unhappy rating in SmileBack links to a specific Autotask ticket. If you are not running a systematic follow-up for negative ratings, you are leaving client recovery opportunities on the table. A basic process: filter BI_SmileBack_Reviews for rating = -1, join to the ticket, assign a follow-up task in Autotask within 24 hours.
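A sketch of that first step, assuming both tables expose the join key as ticket_id and the ticket table has a title column (verify the exact column names in your Proxuma model):

```dax
EVALUATE
SELECTCOLUMNS(
    FILTER('BI_SmileBack_Reviews', 'BI_SmileBack_Reviews'[rating] = -1),
    "Review_Date", 'BI_SmileBack_Reviews'[review_date],
    "Ticket_ID", 'BI_SmileBack_Reviews'[ticket_id],   -- assumed join key
    "Ticket_Title", LOOKUPVALUE(
        'BI_Autotask_Tickets'[title],                 -- assumed column
        'BI_Autotask_Tickets'[ticket_id],
        'BI_SmileBack_Reviews'[ticket_id]
    )
)
ORDER BY [Review_Date] DESC
```

The resulting list is the work queue for the 24-hour follow-up task in Autotask.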
The BI_Autotask_Contacts[opted_out_from_surveys] field tracks contacts who will never receive a SmileBack request. If several key client contacts have opted out, your CSAT data may not reflect their experience at all. It is worth reviewing which clients have a high opt-out rate, particularly if those clients are also generating a lot of tickets.
SmileBack integrates directly with Autotask. When a ticket is marked as complete, SmileBack automatically triggers a one-click satisfaction request to the contact associated with that ticket. The client clicks Happy, Neutral, or Unhappy. No login required, no long form. That frictionless experience is why response rates tend to be higher than traditional surveys.
SmileBack stores ratings as numeric values in BI_SmileBack_Reviews[rating]: 1 means Happy (smiley face), 0 means Neutral, and -1 means Unhappy. When Proxuma Power BI calculates a positive percentage, it counts rows where rating = 1 and divides by total rows. This is the same method SmileBack uses in their own dashboard.
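That calculation can be expressed as a query-scoped measure; the measure name below is illustrative, not part of the Proxuma model:

```dax
DEFINE
    MEASURE 'BI_SmileBack_Reviews'[Pct_Positive] =
        DIVIDE(
            CALCULATE(
                COUNTROWS('BI_SmileBack_Reviews'),
                'BI_SmileBack_Reviews'[rating] = 1    -- Happy rows only
            ),
            COUNTROWS('BI_SmileBack_Reviews')         -- all rows in the filter context
        )
EVALUATE
    ROW("Pct_Positive", [Pct_Positive])
```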
Yes. Each SmileBack review links to an Autotask ticket, and each ticket has an assigned resource. You can join BI_SmileBack_Reviews to BI_Autotask_Tickets via ticket_id, then group by the assigned resource to see each technician's positive rate. This is one of the most actionable CSAT analyses you can run in Proxuma Power BI.
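A minimal sketch of that join-and-group, assuming ticket_id exists on both tables and the tickets table has an assigned_resource_name column (both names are assumptions to verify against your model):

```dax
EVALUATE
VAR _Joined =
    ADDCOLUMNS(
        'BI_SmileBack_Reviews',
        "Technician", LOOKUPVALUE(
            'BI_Autotask_Tickets'[assigned_resource_name],   -- assumed column
            'BI_Autotask_Tickets'[ticket_id],
            'BI_SmileBack_Reviews'[ticket_id]
        )
    )
RETURN
    GROUPBY(
        _Joined,
        [Technician],
        "Reviews", SUMX(CURRENTGROUP(), 1),
        "Pct_Positive", DIVIDE(
            SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
            SUMX(CURRENTGROUP(), 1)
        )
    )
ORDER BY [Pct_Positive] DESC
```

Positive rates for technicians with only a handful of reviews will be noisy, so filter to a minimum review count before comparing people.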
SmileBack generally points to 15–20 responses per technician per month as a healthy baseline. At the team level, that translates to roughly 120–160 per month for a team of 8. The positive rate benchmark sits around 80%. Anything above 85% is genuinely strong. This MSP’s 87.7% puts it in the top tier of SmileBack users.
Contacts who have opted out of surveys will never receive a SmileBack request, regardless of how many tickets are closed for them. Proxuma tracks this via BI_Autotask_Contacts[opted_out_from_surveys]. If a client has several opted-out contacts and generates a lot of tickets, their experience will be invisible to your CSAT reporting. It is worth periodically reviewing which clients have the highest opt-out rates.
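One way to surface those opt-out rates per client, assuming the contacts table carries a company_name column (an assumption to verify):

```dax
EVALUATE
GROUPBY(
    'BI_Autotask_Contacts',
    'BI_Autotask_Contacts'[company_name],   -- assumed column name
    "Contacts", SUMX(CURRENTGROUP(), 1),
    "Opted_Out", SUMX(CURRENTGROUP(),
        IF('BI_Autotask_Contacts'[opted_out_from_surveys] = TRUE(), 1, 0)),
    "Opt_Out_Rate", DIVIDE(
        SUMX(CURRENTGROUP(),
            IF('BI_Autotask_Contacts'[opted_out_from_surveys] = TRUE(), 1, 0)),
        SUMX(CURRENTGROUP(), 1)
    )
)
ORDER BY [Opt_Out_Rate] DESC
```

Cross-reference the top of this list against ticket volume: a high-ticket client at the top is exactly the blind spot the paragraph above describes.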
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.