
How Adaptive Comparative Judgement further enhanced a commitment to assessment innovation and excellence

Written by Lisa Holloway | Jul 10, 2025

Introduction

NCFE is an educational charity specialising in vocational and technical learning. Just like RM, NCFE is a long-established education organisation focused on enhancing learning and improving life chances.

Both organisations also share a deep passion for innovation and have worked together for a number of years in various capacities.

In this latest project, the RM Compare team worked with NCFE to tackle a particular assessment challenge with a new, holistic approach: Adaptive Comparative Judgement (ACJ).

NCFE used RM Compare to improve the way it assessed applications for two competitions: the NCFE Assessment Innovation Fund and the NCFE Aspiration Awards.

Together we were able to demonstrate the transformative potential of this approach, delivering significant improvements in efficiency, reliability and stakeholder engagement in two critical, live evaluation processes.

The challenge

NCFE sought to enhance their assessment processes, particularly for the Assessment Innovation Fund (AIF) and the Aspiration Awards shortlisting.

Four key areas for improvement were identified in the existing process:

  • Time intensity
  • Reliability
  • Stakeholder engagement
  • Standardisation across diverse items

Two live processes were selected as pilots: the Assessment Innovation Fund (AIF) evaluation (Project 1) and the Aspiration Awards shortlisting (Project 2).

The projects

NCFE implemented RM Compare’s Comparative Judgement (CJ) system for both the AIF and the Aspiration Awards processes.

The platform’s key features included:

  • Paired comparisons of applications
  • Automated rank ordering based on multiple judgements (illustrated in the sketch below)
  • User-friendly interface for judges
  • Comprehensive data analysis and reporting tools
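
The case study doesn’t describe RM Compare’s algorithm, so purely as an illustration of how paired comparisons can be turned into a rank order, here is a minimal sketch using the classic Bradley-Terry model. The adaptive pair selection that puts the “A” in ACJ is not shown, and the item names and judgement data are invented:

```python
# A minimal sketch of how paired comparisons become a rank order, using
# the classic Bradley-Terry model. Generic illustration only: RM
# Compare's actual adaptive algorithm is not described in the case
# study, and the data below are invented.
import math
from collections import defaultdict

def bradley_terry(items, judgements, iterations=200):
    """Rank items from a list of (winner, loser) judgement pairs.

    Uses the standard minorise-maximise update: an item's strength is
    its win count divided by the sum, over its opponents, of
    comparisons / (own strength + opponent strength).
    """
    wins = defaultdict(int)
    n_compared = defaultdict(int)  # comparisons per unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        n_compared[frozenset((winner, loser))] += 1

    strength = {i: 1.0 for i in items}
    for _ in range(iterations):
        updated = {}
        for i in items:
            denom = sum(
                n_compared[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and n_compared[frozenset((i, j))]
            )
            updated[i] = wins[i] / denom if denom else strength[i]
        # Normalise so strengths keep a geometric mean of 1.
        g = math.exp(sum(math.log(s) for s in updated.values()) / len(updated))
        strength = {i: s / g for i, s in updated.items()}

    return sorted(items, key=strength.get, reverse=True), strength

# Invented toy data: every item wins and loses at least once, so the
# estimates stay finite (real sessions need many more judgements).
judgements = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
              ("B", "D"), ("C", "D"), ("B", "A"), ("C", "B"), ("D", "C")]
ranking, scores = bradley_terry(["A", "B", "C", "D"], judgements)
print(ranking)  # strongest first, e.g. ['A', 'B', 'C', 'D']
```
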
Project 1: Assessment Innovation Fund (AIF) Evaluation

The NCFE Assessment Innovation Fund (AIF) is an initiative to support and develop innovative approaches to assessment in education. Its aims are to break boundaries in assessment practices, support and pilot new ideas for the future of assessment, build trust and confidence in innovative assessment methods, and add value to stakeholders including learners, educators, employers, and government.

  • 40 applications received, plus 4 seed scripts from previous applications
  • 61 volunteer judges, each tasked with making 10 judgements

Project 2: Aspiration Awards Shortlisting

The Aspiration Awards honour the success of learners, apprentices, educators, support staff and educational organisations across the UK. There are multiple awards, including Against All Odds, Apprentice of the Year, Centre of the Year and Learner of the Year.

  • 379 entries across six different categories
  • 41 volunteer judges distributed across categories
  • 10 judges for categories with many entries, 5 for those with fewer
  • Each judge was set to make 50-60 judgements in their assigned category (see the rough workload calculation below)
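
For a rough sense of the depth of review these settings imply, here is a back-of-envelope calculation. It assumes each judgement compares exactly two items and that every judge completes their allocation; the 55 judgements per Awards judge is our midpoint assumption for the stated 50-60 range.

```python
# Back-of-envelope estimate of how often each item is seen by a judge.
# Assumes each judgement compares exactly two items and all judges
# complete their allocation (assumptions, not stated results).
def appearances_per_item(n_items, n_judges, judgements_per_judge):
    total_judgements = n_judges * judgements_per_judge
    return 2 * total_judgements / n_items  # each judgement shows 2 items

# AIF: 40 applications + 4 seed scripts, 61 judges x 10 judgements.
print(round(appearances_per_item(44, 61, 10), 1))   # ~27.7 views each
# Aspiration Awards: 379 entries, 41 judges x ~55 judgements (midpoint).
print(round(appearances_per_item(379, 41, 55), 1))  # ~11.9 views each
```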

The results

Result 1: Dramatic time saving

The CJ method substantially reduced the overall evaluation time in both trials. For the AIF trial, the time spent on evaluation decreased by 58%, despite more judges and more application reviews. Similarly, in the Aspiration Awards, the total time spent on shortlisting was reduced by 43%, again with significantly more reviews of each application and more judges. The additional eyes on each application gave teams a high degree of confidence in the process compared to previous methods.

[Infographic: combined savings across both processes, based on average employee cost, with total evaluation time, set-up time and reviews per application shown for the AIF evaluation and the Aspiration Awards shortlisting.]
Result 2: Enhanced stakeholder engagement

Feedback from stakeholders was overwhelmingly positive. Judges found the CJ process user-friendly and efficient. They appreciated the simplicity and the ability to manage their judgements around other commitments. The increased engagement, with more judges involved than in previous methods, contributed to a more inclusive and comprehensive evaluation process. Administrators also found the setup and management of CJ sessions straightforward, with time savings and significantly improved visibility of progress.

  • More judges: significantly more judges were involved in both processes
  • Judge diversity: the Aspiration Awards judging pool expanded beyond the communications team to include more diverse perspectives
  • User friendly: judges appreciated the simple, user-friendly interface
  • Flexibility: judgements could be completed around other work commitments
  • Admin friendly: administrators found set-up and management straightforward
  • Handling diverse submissions: the comparative approach proved effective in evaluating a wide range of innovative and subjective entries, with holistic judgements allowing nuanced evaluation beyond rigid criteria

Result 3: High reliability

Both trials demonstrated high reliability scores, indicating consistent and dependable judgements. In the AIF trial, the reliability score reached 0.73, which is considered acceptable given the breadth of applications. For the Aspiration Awards, the reliability scores were even higher, averaging above 0.80 across all categories. CJ provided direction to, and evidence for, the moderation processes for both activities, further enhancing reliability.
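
The case study doesn’t state which reliability statistic was used, but CJ sessions commonly report a Scale Separation Reliability (SSR): the proportion of the variance in the item estimates that reflects genuine differences between items rather than measurement error. A minimal sketch, with invented numbers:

```python
# A sketch of Scale Separation Reliability (SSR), a coefficient commonly
# reported for CJ sessions. The statistic actually used by RM Compare is
# not stated in the case study; all numbers below are invented.
from statistics import mean, pvariance

def scale_separation_reliability(estimates, standard_errors):
    """SSR = estimated 'true' variance / observed variance.

    Observed variance in the item estimates mixes genuine quality
    differences with measurement error, so the mean squared standard
    error is subtracted out to estimate the 'true' variance.
    """
    observed = pvariance(estimates)
    error = mean(se ** 2 for se in standard_errors)
    return (observed - error) / observed

# Invented item quality estimates (logits) and their standard errors.
estimates = [1.8, 1.1, 0.4, -0.2, -0.9, -1.6]
ses = [0.55, 0.50, 0.48, 0.49, 0.52, 0.58]
print(round(scale_separation_reliability(estimates, ses), 2))  # ~0.8
```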

Results summary

  • Efficiency: substantial time savings while increasing the depth of evaluation
  • Reliability: consistently high reliability scores across different evaluation contexts
  • Enhanced confidence: more reviews per application, robust moderated data and outlier detection
  • Engagement: increased participation and positive feedback from stakeholders
  • Flexibility: easily adaptable to different assessment scenarios
  • Data-driven insights: comprehensive analytics for informed decision-making
  • Scalability: demonstrated effectiveness across varying numbers of entries and judges
  • Cost savings: significant savings over existing processes

Conclusion

The encouraging results from this study point to a number of opportunities for further exploration.

The principles underpinning the RM Compare system are now well understood within NCFE, which will endeavour to unlock even more value for its staff, customers and users.

The RM Compare team looks forward to working with NCFE and other awarding organisations in the exciting domain of Adaptive Comparative Judgement.

Next steps

The implementation of RM Compare at NCFE has resulted in significant improvements across multiple dimensions of its assessment processes.

By leveraging ACJ technology, NCFE has set new standards in assessment efficiency and reliability, demonstrating the platform’s potential to revolutionise assessment practices across various sectors.

Work is ongoing to investigate how NCFE might use RM Compare technology to improve processes and add value in a number of scenarios, including:

  • Recruitment: efficient shortlisting of high-volume applications
  • Internal awards: streamlining processes for other internal awards
  • Decision making: project prioritisation, AI frameworks, and new product development
  • Procurement: enhancing high-volume tender processes
  • Assessment solutions: an alternative to traditional criteria-based methods in formative and summative assessments