SmileBack CSAT Survey Response Volume Report
AI-GENERATED REPORT
Generated by AI via Proxuma Power BI MCP server. Total ratings, distribution, monthly rate, and industry benchmark comparison for SmileBack-integrated MSPs.

Built from: SmileBack CSAT
How this report was made
1. Autotask PSA: multiple data sources combined
2. Proxuma Power BI: pre-built MSP semantic model with 50+ measures
3. AI via MCP: Claude or ChatGPT writes DAX queries, executes them, and formats the output
4. This report: KPIs, breakdowns, trends, and recommendations
Ready in under 15 minutes.


The report covers every SmileBack review synced through Autotask PSA that is relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.

Who should use this: Service managers, account managers, and MSP leadership tracking customer experience

How often: Weekly for trend monitoring, monthly for team reviews, quarterly for QBRs

Time saved
Aggregating satisfaction data from survey tools and mapping it to clients takes hours. This report automates it.
Early warning
Declining satisfaction scores predict churn. Catching the trend early gives you time to act.
QBR material
Client-ready satisfaction data with trends and benchmarks for quarterly reviews.
Report category: CSAT & Customer Satisfaction
Data source: Autotask PSA · Datto RMM · Datto Backup · Microsoft 365 · SmileBack · HubSpot · IT Glue
Refresh: Real-time via Power BI
Generation time: Under 15 minutes
AI required: Claude, ChatGPT, or Copilot
Audience: Service managers, account managers
Where to find this in Proxuma
Power BI › CSAT › SmileBack CSAT Survey Response Volume...
What you can measure in this report
- Summary Metrics: Total Ratings (All-Time), Ratings Last Year, Avg Rating (All-Time), YoY Rating Change
- Rating Distribution
- Response Volume Analysis
- Industry Benchmarks
- Key Findings
- Frequently Asked Questions
AI-Generated Power BI Report
Demo Report: This report uses synthetic data to demonstrate AI-generated insights from Proxuma Power BI. The structure, DAX queries, and analysis reflect real MSP data patterns.
1.0 Summary Metrics

Total Ratings (All-Time): 10,178 (87.7% positive)
Ratings Last Year: 1,475
Avg Rating (All-Time): 87.7% positive (Happy = 1)
YoY Rating Change: +8.9pp (from 78.8% to 87.7%)
What are these DAX queries? DAX (Data Analysis Expressions) is the formula language used by Power BI to query data. Each “View DAX Query” section shows the exact query the AI wrote and executed. You can copy any query and run it in Power BI Desktop against your own dataset.
View DAX Query — CSAT Volume Summary
EVALUATE
ROW(
    "CSATAvg", [CSAT - Average Rating],
    "Ratings", [CSAT - Total Ratings],
    "CSATLastYear", [CSAT - Average Rating - Last Year]
)
2.0 Rating Distribution

SmileBack uses a three-value scale: 1 = Happy, 0 = Neutral, -1 = Unhappy. Distribution based on 10,178 all-time reviews.

Happy (1): 87.7% (8,926 responses)
Neutral (0): 7.5% (763 responses)
Unhappy (-1): 4.8% (489 responses)

The 87.7% positive rate is a strong result. For context, SmileBack considers 80%+ to be a healthy baseline for MSPs. Every unhappy response is worth reviewing: SmileBack links each rating back to the originating ticket in Autotask, so you can trace exactly which ticket generated a negative response and who handled it.

The neutral category (7.5%) often gets overlooked, but 763 neutral responses represent clients who were not unhappy enough to complain but not satisfied enough to endorse your service. That is a recoverable group if you follow up.
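
To turn that into a follow-up list, a query along these lines would pull the most recent neutral responses with their originating tickets. This is a sketch: the ticket_id and review_date column names are taken from the queries elsewhere in this report, so verify them against your own model.

EVALUATE
TOPN(
    50,
    SELECTCOLUMNS(
        FILTER(
            'BI_SmileBack_Reviews',
            'BI_SmileBack_Reviews'[rating] = 0
        ),
        "Ticket", 'BI_SmileBack_Reviews'[ticket_id],
        "ReviewDate", 'BI_SmileBack_Reviews'[review_date]
    ),
    [ReviewDate], DESC
)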

View DAX Query — Rating Distribution by Score
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        'BI_SmileBack_Reviews',
        'BI_SmileBack_Reviews'[rating]
    ),
    "Count", CALCULATE(COUNTROWS('BI_SmileBack_Reviews')),
    "Percentage", DIVIDE(
        CALCULATE(COUNTROWS('BI_SmileBack_Reviews')),
        CALCULATE(COUNTROWS('BI_SmileBack_Reviews'), ALL('BI_SmileBack_Reviews'[rating]))
    )
)
ORDER BY 'BI_SmileBack_Reviews'[rating] DESC
3.0 Response Volume Analysis

Putting 10,178 total ratings and 1,475 last-year ratings into context

All-Time Volume: 10,178 SmileBack reviews on record

At 87.7% positive, this translates to approximately 8,926 happy, 763 neutral, and 489 unhappy responses across all recorded history.

Last Year Volume: 1,475 reviews in the most recent full year

That works out to ~123 per month on average. If you have 8 technicians, that is roughly 15 per tech per month, which matches the SmileBack benchmark floor.

Estimated Monthly Breakdown (Last Year)
[Chart: estimated monthly distribution (illustrative), averaging 123 reviews per month]

The gap between the all-time total (10,178) and last year (1,475) tells you the data spans several years. If we divide 10,178 by 123 per month, that implies roughly 82 months of history, or just under 7 years. That is a meaningful longitudinal dataset. Year-over-year, positive ratings jumped from 78.8% to 87.7%, an 8.9 percentage point improvement that is unlikely to be accidental.

View DAX Query — Monthly Response Volume (Last 12 Months)
EVALUATE
VAR _Last12Months =
    CALCULATETABLE(
        'BI_SmileBack_Reviews',
        DATESINPERIOD(
            'BI_SmileBack_Reviews'[review_date],
            LASTDATE('BI_SmileBack_Reviews'[review_date]),
            -12, MONTH
        )
    )
RETURN
GROUPBY(
    ADDCOLUMNS(
        _Last12Months,
        "YearMonth", FORMAT('BI_SmileBack_Reviews'[review_date], "YYYY-MM")
    ),
    [YearMonth],
    "Total_Reviews", COUNTX(CURRENTGROUP(), 1),
    "Happy_Reviews", SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
    "Pct_Happy", DIVIDE(
        SUMX(CURRENTGROUP(), IF('BI_SmileBack_Reviews'[rating] = 1, 1, 0)),
        COUNTX(CURRENTGROUP(), 1)
    )
)
ORDER BY [YearMonth] ASC
4.0 Industry Benchmarks

How does 123 responses per month compare to what other MSPs achieve with SmileBack?

Your MSP (last year): 123 responses/month
SmileBack benchmark (good): 40–60/month
SmileBack benchmark (floor): 15–20 per tech per month
Positive rating (industry avg): ~80%
Your positive rating: 87.7%

At 123 responses per month, your volume is well above the SmileBack floor of 15–20 per tech per month, assuming a team of 6–8 technicians. The more interesting benchmark is the positive rate: 87.7% sits nearly 8 points above the typical industry average of ~80%. That is a meaningful gap that suggests either strong service delivery, a well-selected client base, or both.

One note: response rate (how many surveys get answered relative to how many are sent) is different from response volume. SmileBack does not always surface response rate directly, but you can estimate it by comparing total ratings against total closed tickets in the same period via Autotask data in Power BI.

View DAX Query — Response Rate vs Closed Tickets
EVALUATE
VAR _LastYear_Reviews = CALCULATE(
    COUNTROWS('BI_SmileBack_Reviews'),
    DATESINPERIOD(
        'BI_SmileBack_Reviews'[review_date],
        LASTDATE('BI_SmileBack_Reviews'[review_date]),
        -1, YEAR
    )
)
VAR _LastYear_Closed = CALCULATE(
    COUNTROWS('BI_Autotask_Tickets'),
    'BI_Autotask_Tickets'[Status] = "Complete",
    DATESINPERIOD(
        'BI_Autotask_Tickets'[completedDate],
        LASTDATE('BI_Autotask_Tickets'[completedDate]),
        -1, YEAR
    )
)
RETURN
ROW(
    "Reviews_LastYear", _LastYear_Reviews,
    "Closed_Tickets_LastYear", _LastYear_Closed,
    "Est_Response_Rate", DIVIDE(_LastYear_Reviews, _LastYear_Closed),
    "Opted_Out_Contacts", CALCULATE(
        COUNTROWS('BI_Autotask_Contacts'),
        'BI_Autotask_Contacts'[opted_out_from_surveys] = TRUE()
    )
)
5.0 Key Findings

What this data means for your service delivery

1. Response volume is strong enough to be statistically meaningful

At 123 reviews per month and 10,178 all-time, you have enough data to segment by client, technician, ticket type, and time period without running into sample size problems. Most CSAT programs fail not because of low scores but because of low volume. Yours does not have that problem.

2. The 8.9 percentage point improvement in positive ratings is worth investigating

A jump from 78.8% to 87.7% positive year over year is too large to dismiss as noise. Something changed: possibly a staffing change, a process improvement, or a shift in the client base. Segmenting the data by technician and client for both periods would help identify where the improvement came from, and whether it is holding.

3. 489 unhappy responses need a closed-loop process

Each unhappy rating in SmileBack links to a specific Autotask ticket. If you are not running a systematic follow-up for negative ratings, you are leaving client recovery opportunities on the table. A basic process: filter BI_SmileBack_Reviews for rating = -1, join to the ticket, assign a follow-up task in Autotask within 24 hours.
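
As a sketch of the first step, the query below lists unhappy responses from the last 30 days with their ticket IDs. The ticket_id and review_date column names are assumed from the queries earlier in this report; confirm them against your model.

EVALUATE
SELECTCOLUMNS(
    FILTER(
        'BI_SmileBack_Reviews',
        'BI_SmileBack_Reviews'[rating] = -1
            && 'BI_SmileBack_Reviews'[review_date] >= TODAY() - 30
    ),
    "Ticket", 'BI_SmileBack_Reviews'[ticket_id],
    "ReviewDate", 'BI_SmileBack_Reviews'[review_date]
)
ORDER BY [ReviewDate] DESC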

4. Opted-out contacts reduce your coverage without you noticing

The BI_Autotask_Contacts[opted_out_from_surveys] field tracks contacts who will never receive a SmileBack request. If several key client contacts have opted out, your CSAT data may not reflect their experience at all. It is worth reviewing which clients have a high opt-out rate, particularly if those clients are also generating a lot of tickets.
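
To find those clients, something along these lines would count opted-out contacts per company. Only the [opted_out_from_surveys] column is confirmed by this report; the [companyName] column is illustrative, so substitute the actual company field in your BI_Autotask_Contacts table.

EVALUATE
FILTER(
    SUMMARIZECOLUMNS(
        'BI_Autotask_Contacts'[companyName],   -- illustrative column name
        "OptedOut", CALCULATE(
            COUNTROWS('BI_Autotask_Contacts'),
            'BI_Autotask_Contacts'[opted_out_from_surveys] = TRUE()
        ),
        "TotalContacts", COUNTROWS('BI_Autotask_Contacts')
    ),
    [OptedOut] > 0
)
ORDER BY [OptedOut] DESC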

Frequently Asked Questions
How does SmileBack send survey requests?

SmileBack integrates directly with Autotask. When a ticket is marked as complete, SmileBack automatically triggers a one-click satisfaction request to the contact associated with that ticket. The client clicks Happy, Neutral, or Unhappy. No login required, no long form. That frictionless experience is why response rates tend to be higher than traditional surveys.

What does the rating scale of -1, 0, 1 mean in Power BI?

SmileBack stores ratings as numeric values in BI_SmileBack_Reviews[rating]: 1 means Happy (smiley face), 0 means Neutral, and -1 means Unhappy. When Proxuma Power BI calculates a positive percentage, it counts rows where rating = 1 and divides by total rows. This is the same method SmileBack uses in their own dashboard.
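
Expressed as a reusable measure, that calculation looks roughly like this. It is a sketch for reference; the Proxuma model already ships an equivalent measure ([CSAT - Average Rating]), so you would not normally need to recreate it.

CSAT Positive % =
DIVIDE(
    CALCULATE(
        COUNTROWS('BI_SmileBack_Reviews'),
        'BI_SmileBack_Reviews'[rating] = 1
    ),
    COUNTROWS('BI_SmileBack_Reviews')
)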

Can I see which technician generates the most negative ratings?

Yes. Each SmileBack review links to an Autotask ticket, and each ticket has an assigned resource. You can join BI_SmileBack_Reviews to BI_Autotask_Tickets via ticket_id, then group by the assigned resource to see each technician's positive rate. This is one of the most actionable CSAT analyses you can run in Proxuma Power BI.
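
A sketch of that analysis, assuming the model relates the two tables on ticket_id. The [assignedResourceName] column name is illustrative; use whatever your tickets table calls the assigned resource.

EVALUATE
SUMMARIZECOLUMNS(
    'BI_Autotask_Tickets'[assignedResourceName],   -- illustrative column name
    "Reviews", COUNTROWS('BI_SmileBack_Reviews'),
    "PositivePct", DIVIDE(
        CALCULATE(
            COUNTROWS('BI_SmileBack_Reviews'),
            'BI_SmileBack_Reviews'[rating] = 1
        ),
        COUNTROWS('BI_SmileBack_Reviews')
    )
)
ORDER BY [PositivePct] ASC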

What is a good SmileBack response rate for an MSP?

SmileBack generally points to 15–20 responses per technician per month as a healthy baseline. At the team level, that translates to roughly 120–160 per month for a team of 8. The positive rate benchmark sits around 80%. Anything above 85% is genuinely strong. This MSP’s 87.7% puts it in the top tier of SmileBack users.

How do opted-out contacts affect my data?

Contacts who have opted out of surveys will never receive a SmileBack request, regardless of how many tickets are closed for them. Proxuma tracks this via BI_Autotask_Contacts[opted_out_from_surveys]. If a client has several opted-out contacts and generates a lot of tickets, their experience will be invisible to your CSAT reporting. It is worth periodically reviewing which clients have the highest opt-out rates.


Generate this report from your own data

Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports in minutes, not days.

See more reports · Get started