Cross-referencing N-able RMM device data with Autotask ticket volume to identify which clients generate the most work and whether device count is a reliable predictor.
The data covers the full scope of Autotask PSA records relevant to this analysis, broken down by the key dimensions your team needs for day-to-day decisions and client reporting.
Who should use this: Service desk managers, dispatch leads, and operations teams
How often: Daily for queue management, weekly for trend analysis, monthly for capacity planning
N-able RMM device count per client, ranked from highest to lowest. 224 devices across 109 customers, with the top 12 shown below.
EVALUATE
TOPN(
    12,
    SUMMARIZECOLUMNS(
        'BI_Datto_Rmm_Sites'[Name],
        "DeviceCount", CALCULATE(COUNTROWS('BI_Datto_Rmm_Devices')),
        "OnlineDevices", CALCULATE(COUNTROWS(FILTER('BI_Datto_Rmm_Devices', 'BI_Datto_Rmm_Devices'[Online] = TRUE()))),
        "AlertCount", CALCULATE(COUNTROWS('BI_Datto_Rmm_Alerts'))
    ),
    [DeviceCount], DESC
)
Autotask ticket creation and completion for the top 12 clients by volume. Created vs completed comparison shows resolution throughput.
EVALUATE
TOPN(
    12,
    ADDCOLUMNS(
        SUMMARIZE(
            BI_Autotask_Tickets,
            BI_Autotask_Tickets[company_name]
        ),
        "TicketsCreated", CALCULATE(
            COUNT(BI_Autotask_Tickets[ticket_id])
        ),
        "TicketsCompleted", CALCULATE(
            COUNT(BI_Autotask_Tickets[ticket_id]),
            BI_Autotask_Tickets[status_name] = "Complete"
        )
    ),
    [TicketsCreated], DESC
)
ORDER BY [TicketsCreated] DESC
Cross-referencing which clients appear in both the RMM device list and the top ticket generators. Overlap reveals whether device count predicts ticket volume.
| Client | Devices | Tickets Created | Tickets/Device | Overlap |
|---|---|---|---|---|
| Client A | 28 | 5,290 | 188.9 | Both lists |
| Client M | N/A | 6,381 | N/A | Tickets only |
| Client N | N/A | 5,458 | N/A | Tickets only |
| Client B | 21 | N/A | N/A | Devices only |
| Client C | 18 | N/A | N/A | Devices only |
| Client O | N/A | 2,775 | N/A | Tickets only |
| Client D | 15 | N/A | N/A | Devices only |
Client A is the only client in both lists. With 28 devices and 5,290 tickets, that works out to roughly 189 tickets per device, a strong signal that device health is directly linked to service desk workload. The remaining top ticket generators (Client M, Client N, Client O) do not appear in the RMM top 12 at all, which suggests their ticket volume originates from other sources: user requests, software issues, or onboarding tasks rather than device failures.
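The overlap check and the tickets-per-device ratio can be reproduced outside of DAX. A minimal Python sketch, using the figures from the tables above (client names and counts are taken from this report for illustration):

```python
# Cross-reference device counts (from N-able RMM) with ticket counts
# (from Autotask) and compute tickets per device for the overlap.
devices = {"Client A": 28, "Client B": 21, "Client C": 18, "Client D": 15}
tickets = {"Client M": 6381, "Client N": 5458, "Client A": 5290, "Client O": 2775}

overlap = devices.keys() & tickets.keys()          # clients present in both lists
for client in sorted(overlap):
    ratio = tickets[client] / devices[client]      # tickets generated per device
    print(f"{client}: {ratio:.1f} tickets/device")  # Client A: 188.9 tickets/device
```

The same set-intersection logic applies whichever tool runs it; the hard part in practice is that the keys (company names) must match exactly across both systems, which is the data-quality issue discussed below.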
How effectively are tickets being resolved across the top clients? Completion rate = completed / created.
| Client | Created | Completed | Open | Rate |
|---|---|---|---|---|
| Client P | 2,364 | 2,364 | 0 | 100.0% |
| Client V | 1,684 | 1,684 | 0 | 100.0% |
| Client T | 1,758 | 1,745 | 13 | 99.3% |
| Client A | 5,290 | 5,250 | 40 | 99.2% |
| Client Q | 2,376 | 2,356 | 20 | 99.2% |
| Client R | 2,180 | 2,155 | 25 | 98.9% |
| Client S | 1,803 | 1,783 | 20 | 98.9% |
| Client W | 1,629 | 1,611 | 18 | 98.9% |
| Client N | 5,458 | 5,393 | 65 | 98.8% |
| Client O | 2,775 | 2,742 | 33 | 98.8% |
| Client M | 6,381 | 6,268 | 113 | 98.2% |
| Client U | 1,728 | 1,692 | 36 | 97.9% |
EVALUATE
ADDCOLUMNS(
    SUMMARIZE(
        BI_Autotask_Tickets,
        BI_Autotask_Tickets[company_name]
    ),
    "TicketsCreated", CALCULATE(
        COUNT(BI_Autotask_Tickets[ticket_id])
    ),
    "TicketsCompleted", CALCULATE(
        COUNT(BI_Autotask_Tickets[ticket_id]),
        BI_Autotask_Tickets[status_name] = "Complete"
    ),
    "CompletionRate", DIVIDE(
        CALCULATE(
            COUNT(BI_Autotask_Tickets[ticket_id]),
            BI_Autotask_Tickets[status_name] = "Complete"
        ),
        CALCULATE(COUNT(BI_Autotask_Tickets[ticket_id]))
    )
)
ORDER BY [CompletionRate] DESC
Clients that break the expected pattern between device count and ticket volume
Client M leads all ticket volume with 6,381 tickets but does not appear in the N-able device top 12. This means their tickets are not driven by device health issues. Possible explanations: Client M may use a different RMM tool, their devices may not be onboarded to N-able, or their ticket volume comes from user-driven requests (password resets, software installs, onboarding) rather than hardware or monitoring alerts.
Client N follows the same pattern with 5,458 tickets and no presence in the device list. Two of your three largest ticket generators are invisible in your RMM data. That is a blind spot worth investigating.
On the other side, Client B has 21 devices but does not appear in the top ticket generators. This could be a positive signal: their devices are well-managed and healthy, generating few reactive tickets. Or it could mean their tickets are logged under a different company name in Autotask.
The takeaway: device count alone does not predict ticket volume. The correlation exists for Client A, but the top two ticket generators have no meaningful RMM footprint in this dataset. Cross-source data matching between N-able and Autotask needs attention.
The only client appearing in both the device top 12 and ticket top 12. At 189 tickets per device, their device health is likely a significant driver of service desk workload. Prioritize proactive maintenance and device replacement for this account.
Client M (6,381 tickets) and Client N (5,458 tickets) are your two busiest accounts but do not appear in the N-able device data. Either their devices are not onboarded to N-able, they use a separate monitoring tool, or the company name mapping between systems is off. You cannot correlate device health to tickets if the data is not connected.
Every client in the top 12 has a ticket completion rate of 97.9% or higher. The service desk is resolving tickets effectively. Client P and Client V sit at a perfect 100%. The bottleneck is not resolution capacity; it is the volume of tickets being created in the first place.
4 priorities based on the findings above
With 28 devices generating 5,290 tickets, something is consistently breaking. Pull the N-able alert history for Client A and cross-reference with their most common ticket categories. Look for aging hardware, repeated patch failures, or devices that trigger the same alert weekly. A targeted hardware refresh or policy change could cut their ticket volume by 20-30%.
Your two highest-volume clients are invisible in RMM. Check whether their devices are monitored through a different tool, or if they were never onboarded to N-able. If they are onboarded but the company name does not match Autotask, fix the mapping. Without this link, you cannot do any proactive device management for 11,839 tickets worth of workload.
The limited overlap between the two lists may partly be a data quality issue. If "Client B" in N-able is listed as "Client B Corp" in Autotask, the correlation breaks. Run a name-matching audit between both systems and establish a single naming convention. This is a prerequisite for any meaningful cross-source analysis.
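One way to run that audit, sketched here with Python's standard-library difflib. The names and the 0.9 threshold are illustrative assumptions, not your real data:

```python
from difflib import SequenceMatcher

# Company names as they appear in each system (illustrative examples).
nable_names = ["Client B Corp", "Client A", "Client X Ltd"]
autotask_names = ["Client B", "Client A", "Client X Limited"]

def best_match(name, candidates):
    """Return (similarity score 0..1, closest candidate name)."""
    scored = [(SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    return max(scored)

for name in nable_names:
    score, match = best_match(name, autotask_names)
    flag = "OK" if score >= 0.9 else "REVIEW"   # threshold is a judgment call
    print(f"{name!r} -> {match!r} ({score:.2f}) {flag}")
```

Anything flagged REVIEW goes on the manual-reconciliation list; once names agree, the cross-source joins in the queries above start returning complete results.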
Client B has 21 devices but does not appear in the top ticket generators. If their devices are genuinely healthy and well-maintained, document what is different about their setup: patching schedule, hardware age, monitoring policies. Apply those practices to Client A's fleet as a benchmark.
Device data is pulled from N-able RMM through the Proxuma Power BI connector. The connector syncs device records including customer name, device type, and health status. The AI then runs DAX queries to count devices per customer and cross-reference with Autotask ticket data.
Several reasons: the client may not have devices onboarded in N-able, they may use a different RMM tool, or the company name in N-able does not match the Autotask record. A name-matching audit between both systems will identify which scenario applies.
Tickets per device is a rough measure of how much service desk work each device generates. A high ratio (like Client A's 189 tickets per device) suggests the devices are unhealthy, aging, or misconfigured. A low ratio means the devices are stable and well-managed. It is not a perfect metric because not all tickets are device-related, but it gives you a starting point for investigation.
Start by standardizing company names across N-able and Autotask. Then onboard all client devices into N-able. Once both systems use the same naming and all devices are tracked, the cross-source analysis becomes much more reliable. You can also tag Autotask tickets that originate from N-able alerts to separate device-driven tickets from user requests.
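Tagging alert-originated tickets makes that split measurable. A minimal sketch, assuming a ticket source field exists (the field name and values here are hypothetical, and depend on how your Autotask instance is configured):

```python
# Classify tickets as device-driven vs user-driven based on a source tag.
# "source" and "N-able alert" are illustrative; substitute whatever field
# your Autotask workflow stamps on alert-generated tickets.
tickets = [
    {"id": 1, "source": "N-able alert"},
    {"id": 2, "source": "User request"},
    {"id": 3, "source": "N-able alert"},
]

device_driven = [t for t in tickets if t["source"] == "N-able alert"]
user_driven = [t for t in tickets if t["source"] != "N-able alert"]

share = len(device_driven) / len(tickets)   # fraction of alert-driven work
print(f"Device-driven share: {share:.0%}")
```

With that tag in place, the tickets-per-device ratio can be computed on device-driven tickets only, which makes it a much cleaner health metric.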
Yes. Connect Proxuma Power BI to your N-able RMM and Autotask accounts, add an AI tool (Claude, ChatGPT, or Copilot) via MCP, and ask the same question. The AI writes the DAX queries, runs them against your real data, and produces a report like this in under fifteen minutes.
Connect Proxuma Power BI to your PSA, RMM, and M365 environment, use an MCP-compatible AI to ask questions, and generate custom reports - in minutes, not days.