
Piloting AR remote expert support is the best way to validate its value for your organisation. But to make informed decisions about scaling, you need to measure the right things. Vague impressions are not enough; you need data that demonstrates impact and guides improvement.
This article outlines the key metrics and KPIs to track during an AR remote expert support pilot, organised by category.
Resolution metrics
Resolution metrics measure how effectively AR remote support solves problems.
Resolution rate
What it measures: The percentage of AR support sessions that result in a resolved issue.
Why it matters: A high resolution rate indicates that AR remote support is effective at solving problems without escalation.
How to calculate: (Sessions resulting in resolution / Total sessions) × 100
Target: Aim for a 70–85% resolution rate for most issue types.
First Time Right rate
What it measures: The percentage of sessions where the issue is resolved on the first attempt, without rework or repeat sessions.
Why it matters: A high First Time Right rate indicates clear, accurate guidance from experts.
How to calculate: (Sessions resolved on first attempt / Total resolved sessions) × 100
Target: Aim for 65–80% First Time Right.
Escalation rate
What it measures: The percentage of sessions that cannot be resolved remotely and require expert site visits.
Why it matters: Lower escalation means more issues resolved without costly travel.
How to calculate: (Sessions escalated to site visit / Total sessions) × 100
Target: Aim to reduce escalation by 30–50% compared to the pre-AR baseline.
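To make tracking concrete, here is a minimal sketch of how these three resolution metrics could be computed from exported session records. The record structure and field names (resolved, first_attempt, escalated) are illustrative assumptions, not any specific platform's export format.

```python
# Minimal sketch: resolution metrics from exported session records.
# Field names (resolved, first_attempt, escalated) are illustrative assumptions.

sessions = [
    {"resolved": True,  "first_attempt": True,  "escalated": False},
    {"resolved": True,  "first_attempt": False, "escalated": False},
    {"resolved": False, "first_attempt": False, "escalated": True},
]

total = len(sessions)
resolved = [s for s in sessions if s["resolved"]]

resolution_rate = len(resolved) / total * 100
first_time_right = sum(s["first_attempt"] for s in resolved) / len(resolved) * 100
escalation_rate = sum(s["escalated"] for s in sessions) / total * 100

print(f"Resolution rate:  {resolution_rate:.1f}%")   # target 70-85%
print(f"First Time Right: {first_time_right:.1f}%")  # of resolved sessions, target 65-80%
print(f"Escalation rate:  {escalation_rate:.1f}%")   # aim for 30-50% below baseline
```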
Time metrics
Time metrics measure the speed of support and resolution.
Response time
What it measures: How quickly an expert joins an AR session after a technician requests help.
Why it matters: Fast response reduces technician waiting time and accelerates resolution.
How to calculate: Average time from session request to expert joining
Target: Aim for under 5 minutes for most requests.
Session duration
What it measures: How long AR support sessions last.
Why it matters: Shorter sessions indicate efficient guidance; very long sessions may indicate complex issues or inefficiencies.
How to calculate: Average duration of completed sessions
Benchmark: Typical sessions range from 10–30 minutes depending on issue complexity.
Mean Time To Repair (MTTR)
What it measures: Total time from issue identification to resolution for issues supported via AR.
Why it matters: MTTR is the ultimate measure of support speed and effectiveness.
How to calculate: Average time from fault detection to confirmed resolution
Target: Aim for a 30–50% reduction compared to the pre-AR baseline.
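As a rough illustration, the sketch below derives the three time metrics from timestamped session records. The timestamps and field names are assumptions chosen for the example; substitute whatever your ticketing system or AR platform actually logs.

```python
# Minimal sketch: time metrics from timestamped session records.
# Timestamps and field names are illustrative assumptions.
from datetime import datetime, timedelta

sessions = [
    {
        "fault_detected": datetime(2024, 5, 2, 8, 40),
        "requested":      datetime(2024, 5, 2, 9, 0),   # technician requests help
        "expert_joined":  datetime(2024, 5, 2, 9, 3),
        "ended":          datetime(2024, 5, 2, 9, 25),
        "resolved":       datetime(2024, 5, 2, 9, 25),
    },
]

def avg(deltas):
    """Average of a list of timedeltas."""
    return sum(deltas, timedelta()) / len(deltas)

response_time    = avg([s["expert_joined"] - s["requested"] for s in sessions])
session_duration = avg([s["ended"] - s["expert_joined"] for s in sessions])
mttr             = avg([s["resolved"] - s["fault_detected"] for s in sessions])

print(f"Avg response time:    {response_time}")     # target under 5 minutes
print(f"Avg session duration: {session_duration}")  # benchmark 10-30 minutes
print(f"MTTR:                 {mttr}")              # compare against the pre-AR baseline
```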
Efficiency metrics
Efficiency metrics measure resource utilisation and cost impact.
Expert utilisation
What it measures: How much of available expert time is spent on productive AR sessions.
Why it matters: High utilisation indicates efficient use of expert capacity; low utilisation may indicate over-staffing or demand issues.
How to calculate: (Time spent on sessions / Total available expert time) × 100
Benchmark: Aim for 40–70% utilisation depending on coverage model.
Site visits avoided
What it measures: The number of expert site visits avoided because issues were resolved remotely.
Why it matters: Avoided site visits represent direct cost savings (travel, time, opportunity cost).
How to calculate: Count of issues that would previously have required site visits but were resolved via AR
Target: Aim to avoid 30–50% of site visits compared to baseline.
Cost per resolution
What it measures: The average cost to resolve an issue via AR remote support.
Why it matters: Demonstrates the cost-effectiveness of AR support compared to alternatives.
How to calculate: Total AR support costs / Number of resolved issues
Compare to: Cost per resolution via phone support, site visits or other methods.
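The following sketch pulls the three efficiency metrics together. All figures (hours, costs, visit counts) are illustrative assumptions; replace them with your own pilot data and baseline cost estimates.

```python
# Minimal sketch: efficiency metrics. All figures are illustrative assumptions.

session_hours = 95.0             # expert time spent in AR sessions
available_expert_hours = 160.0   # total expert time available during the period
site_visits_avoided = 18         # issues resolved remotely that would have needed a visit
avg_cost_per_site_visit = 1200.0
total_ar_support_cost = 9000.0   # platform, hardware amortisation, expert time
resolved_issues = 42

utilisation = session_hours / available_expert_hours * 100      # benchmark 40-70%
travel_savings = site_visits_avoided * avg_cost_per_site_visit  # direct cost avoided
cost_per_resolution = total_ar_support_cost / resolved_issues   # compare to phone / site visit

print(f"Expert utilisation:  {utilisation:.1f}%")
print(f"Travel cost avoided: {travel_savings:,.0f}")
print(f"Cost per resolution: {cost_per_resolution:,.2f}")
```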
Quality and satisfaction metrics
Quality metrics measure the experience and outcomes for technicians and experts.
Technician satisfaction
What it measures: How satisfied technicians are with the AR support experience.
Why it matters: Satisfied technicians are more likely to use and benefit from AR support.
How to measure: Post-session surveys (e.g. 1–5 rating scale) or periodic feedback
Target: Aim for 4/5 or higher average satisfaction.
Expert feedback
What it measures: How experts perceive the effectiveness and usability of AR support.
Why it matters: Expert buy-in is essential for sustained success.
How to measure: Periodic surveys or feedback sessions
Knowledge capture rate
What it measures: The percentage of sessions that are recorded and/or converted into reusable knowledge (AR work instructions, documentation).
Why it matters: Knowledge capture multiplies the value of each session.
How to calculate: (Sessions captured or converted / Total sessions) × 100
Target: Aim to capture 80%+ of sessions for potential reuse.
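A brief sketch of how satisfaction and knowledge capture could be computed from post-session survey data; the ratings and capture flags are illustrative assumptions.

```python
# Minimal sketch: quality and knowledge-capture metrics from post-session surveys.
# Ratings and capture flags are illustrative assumptions.

sessions = [
    {"technician_rating": 5, "captured": True},
    {"technician_rating": 4, "captured": True},
    {"technician_rating": 3, "captured": False},  # a missing rating would be None
]

rated = [s["technician_rating"] for s in sessions if s["technician_rating"] is not None]
avg_satisfaction = sum(rated) / len(rated)                                  # target 4/5 or higher
capture_rate = sum(s["captured"] for s in sessions) / len(sessions) * 100   # target 80%+

print(f"Avg technician satisfaction: {avg_satisfaction:.1f}/5")
print(f"Knowledge capture rate:      {capture_rate:.0f}%")
```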
Baseline comparison
All metrics are most meaningful when compared to a baseline. Before starting the pilot, document:
- Current resolution rates, times and escalation levels for target issue types
- Current costs for support via phone, site visits or other methods
- Current technician satisfaction with remote support
Use baseline data to demonstrate improvement and calculate ROI.
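For example, a simple baseline comparison might look like the sketch below, where both the baseline and pilot figures are illustrative assumptions.

```python
# Minimal sketch: comparing pilot results to a pre-AR baseline.
# Baseline and pilot figures are illustrative assumptions.

baseline = {"mttr_hours": 6.5, "escalation_rate": 0.40, "cost_per_resolution": 950.0}
pilot    = {"mttr_hours": 3.8, "escalation_rate": 0.22, "cost_per_resolution": 610.0}

def improvement(before, after):
    """Percentage reduction relative to the baseline value."""
    return (before - after) / before * 100

for metric in baseline:
    print(f"{metric}: {improvement(baseline[metric], pilot[metric]):.0f}% reduction vs baseline")
```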
Reporting and decision-making
Track metrics throughout the pilot and summarise results at defined intervals (weekly or monthly). Present findings to stakeholders with clear comparisons to baseline.
Use pilot data to inform the scale decision:
- Strong results: Expand AR remote support to additional teams, sites or issue types
- Mixed results: Identify improvement areas, refine processes and consider an extended pilot
- Weak results: Investigate root causes before further investment
Getting started
If you are planning an AR remote expert support pilot, define your metrics upfront and establish baseline data before launching. Consistent measurement enables confident decisions and demonstrates value.
Learn more about AR remote expert support for corrective maintenance or contact ActARion to discuss your pilot measurement approach.