When an NYU professor turns to AI-powered oral exams to “fight fire with fire”, it says a lot about where assessment now finds itself. In his case, beautifully polished written assignments started reading less like student work and more like McKinsey memos – and some candidates struggled to defend that work in person.
His response was to revive oral exams at scale using an AI agent: live questioning about project choices, real-time reasoning around case studies, and grading supported by a “council” of models. It’s a powerful reminder that in an AI-enabled world, traditional take-home essays and reports no longer reliably show who really understands the material.
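The “council” idea is essentially ensemble grading: several independent graders score the same response, and the scores are aggregated so no single model’s quirks decide the outcome. The article doesn’t describe the professor’s implementation, so the Python sketch below is purely hypothetical – the grader functions are stand-ins, not any real model API.

```python
# Hypothetical sketch of a "council of models" grading pattern.
# Each grader is a stand-in callable, not a real model API.
from statistics import median
from typing import Callable

Grader = Callable[[str], float]  # takes an answer transcript, returns a 0-100 score

def council_grade(transcript: str, council: list[Grader]) -> float:
    """Aggregate independent scores with the median so one outlier can't dominate."""
    scores = [grader(transcript) for grader in council]
    return median(scores)

# Stand-in graders; in a real system each would wrap a different model or prompt.
council: list[Grader] = [lambda t: 78.0, lambda t: 82.0, lambda t: 55.0]
print(council_grade("candidate's spoken defence of their design choices...", council))  # 78.0
```

The design choice worth noting is the median: unlike a mean, one unusually harsh or generous grader cannot drag the final score far on its own.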
Awarding and qualification bodies face the same dilemma: how do you verify genuine understanding when polished written work can be generated on demand?
Oral exams will be part of the answer in some contexts. They test live reasoning and make it harder to simply recite generated content. But they are also resource-intensive, stressful for many candidates, and difficult to scale across large cohorts and diverse programmes.
That’s where robust content analysis can complement oral assessment rather than compete with it.
"In a world where the use of AI in education and assessment is ever growing, it's never been more important to establish originality in content. RM Echo utilises advanced data science techniques and proprietary algorithms to deliver high-value insights for users." Nick Hope, Product Manager for RM Echo.
RM Echo is designed to help educators and assessment designers understand how a piece of work was produced – not just how it reads.
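RM Echo’s techniques are proprietary, so the following is only a generic illustration of what “how it was produced, not just how it reads” can mean in practice: a toy Python sketch that inspects a hypothetical revision history to distinguish incremental drafting from a single large paste. The Revision structure and the burst metric are assumptions for illustration, not RM Echo’s method.

```python
# Illustrative only: a generic process-over-product signal, not RM Echo's algorithm.
from dataclasses import dataclass

@dataclass
class Revision:
    timestamp: float   # seconds since the writing session started (hypothetical data)
    chars_added: int   # net characters added in this revision

def largest_single_burst(revisions: list[Revision]) -> float:
    """Share of all added text that arrived in the single biggest revision."""
    additions = [max(r.chars_added, 0) for r in revisions]
    total = sum(additions)
    return max(additions) / total if total else 0.0

# Incremental drafting: many small additions spread over an hour.
drafted = [Revision(t, 120) for t in range(0, 3600, 300)]

# Paste-heavy session: almost everything arrives in one revision.
pasted = [Revision(10, 40), Revision(20, 4300), Revision(30, 60)]

print(f"drafted burst share: {largest_single_burst(drafted):.2f}")  # 0.08
print(f"pasted burst share:  {largest_single_burst(pasted):.2f}")   # 0.98
```

In a real pipeline a signal like this would be one of many, and on its own it proves nothing – which is exactly why content analysis works best as a complement to oral assessment rather than a replacement for it.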
If you’re rethinking assessment design in light of AI – whether for professional qualifications, higher education or vocational programmes – RM Echo can help you understand what’s really happening in candidate work and support your wider integrity strategy. Find out more here.
References:
Lee Chong Ming, “An NYU professor who hates that students’ work reads like McKinsey memos held AI oral exams to ‘fight fire with fire’”, Business Insider, 5 January 2026.