The UK’s HE regulator has approved a new provider that will use AI tutors to teach master’s degree programs.
The London School of Innovation (LSI) is the first higher education institution in the UK – and possibly the first in the world – to be granted approval by its national regulator to teach degrees using AI tutors. The move raises questions around the world about the future role and focus of university staff, as well as the role of generative AI.
The Office for Students assessed LSI as meeting all requirements for a degree-conferring institution last month. With enrolments due to start in June, the UK institution will teach six master's programs under the new model, with more planned in the near future.
Degree programs delivered by LSI will be overseen by human academics, but the hard graft of knowledge transfer will fall to the AI bots in the first instance. According to LSI, the innovation of replacing aspects of traditional human-to-human teaching with a squadron of personalised AI bots available 24/7 is not driven by cost savings. Instead, it reflects a bold philosophical rethink of the HE model, led by Co-Founder and President Paymon Khamooshi.
Mr Khamooshi told Future Campus that AI tutors had been painstakingly designed to deepen engagement.
“AI does not automatically improve education. Poor pedagogy with AI is still poor pedagogy, only faster. The opportunity is different,” Mr Khamooshi said.
“AI makes it possible for students to engage more thanks to continuous explanation, practice and questioning with real-time formative feedback (rather than waiting for the next scheduled class or office hour), personalisation of examples and analogies for each student based on their unique background and interests, etc.
“It's the increased and deepened engagement that raises quality. And that's much more than making AI chatbots available to the students.
“At LSI it has taken a lot of design, software engineering, dynamic context injection, complex data architecture, etc that made it possible.”
LSI aims for AI to work for humans, rather than for humans to spend their time keeping AI out.
“The model only works if academics remain central. At LSI, AI tutors support learning, but academic staff 'lead' the educational design, quality, assessment standards and student progress,” he said.
“Humans are still responsible for judgement, context, ethics, pastoral support and the harder conversations. In simple terms, we use AI for continuity and personalisation, and academics for standards and judgement.”
The curriculum is developed by humans and AI in collaboration at every stage, using AI prompt templates in a multi-stage workflow – a process streamlined by technology, not ruled by it.
At the heart of the School is a one-stop-shop Learning Management System, used to manage everything from curriculum development and personalised AI delivery to institutional governance.
The AI tutor is a far cry from the generic generative AI tools used by many students.
“Every student is paired with a 24/7 AI virtual tutor that learns their professional background, interests, and aspirations from day one, then tailors course material accordingly, adapting its language, drawing on personally relevant examples, and delivering content in whichever format suits the learner best,” the company states.
“Formative assessment is woven throughout the process, with instant feedback through Socratic questioning, simulations, and hands-on challenges that identify skill gaps and keep students engaged in real time.”
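LSI has not published its implementation, but the "dynamic context injection" Mr Khamooshi mentions is typically achieved by assembling a per-student system prompt from a stored profile before each tutoring exchange. A minimal sketch of that idea, in which all names (`StudentProfile`, `build_tutor_prompt`) and fields are hypothetical and not drawn from LSI's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    # Hypothetical profile schema; LSI's real data architecture is not public.
    name: str
    background: str                     # professional background, e.g. "paediatric nurse"
    interests: list[str] = field(default_factory=list)
    weak_topics: list[str] = field(default_factory=list)  # flagged by formative assessment

def build_tutor_prompt(profile: StudentProfile, lesson: str) -> str:
    """Inject per-student context into the AI tutor's system prompt."""
    interests = ", ".join(profile.interests) or "general examples"
    weak = ", ".join(profile.weak_topics) or "none recorded yet"
    return (
        f"You are a Socratic tutor for the lesson '{lesson}'.\n"
        f"Student background: {profile.background}.\n"
        f"Draw analogies from: {interests}.\n"
        f"Revisit weak topics with extra practice: {weak}.\n"
        "Ask one question at a time and give formative feedback."
    )

profile = StudentProfile("Asha", "paediatric nurse",
                         interests=["triage", "rostering"],
                         weak_topics=["normalisation"])
print(build_tutor_prompt(profile, "Relational database design"))
```

The design choice this illustrates is that personalisation lives in the context handed to the model at each turn, not in the model itself: as the student's recorded gaps and interests change, so does every subsequent explanation.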
Originally founded as ‘Geeks Academy’ in 2013, the London School of Innovation now operates with a small staff from a campus in south-west London – and initial enrolments are anticipated to be modest.
Human lecturers will be introduced after the first year of operation, and the institution contends that the AI teachers will free staff up for more valuable human-to-human teaching time – shifting focus from delivering the same old lectures and tutorials each week to problem solving and guiding students in a more personalised way.
Mr Khamooshi said the rapid rise of AI as an effective teaching tool had left academic staff with a question of identity that needed to be resolved. In an article about the rapid change in academic roles, he argues that generative AI has left university staff with no choice but to change.
“For centuries, the academic has been teacher, examiner, and knowledge authority rolled into one,” Mr Khamooshi writes.
“AI is now unbundling that role with striking speed, handling explanation, practice feedback, and adaptive instruction at scale. But rather than diminishing the need for academics, this shift may reveal a more precise and more demanding function: intellectual arbitration. The question is whether institutions will redesign around this clarity or defend a model that AI is already quietly outperforming in parts.”
The new School believes AI should be integrated into assessment, "rather than treated mainly as contraband," Mr Khamooshi told FC.
“Students will use AI in their professional lives, so assessment should test whether they can use it intelligently, critically and responsibly. That means assessing judgement, reasoning, application, originality of approach and the quality of human decision-making around AI outputs. The deeper question is not whether students used AI. It is whether they understand the domain well enough to question it, improve it and take responsibility for the result. That is where higher education still earns its keep.”
The new approach poses immediate questions about the future roles of academic staff, and the urgent need to reconsider their focus and accountabilities.
The School uses the same platform for Learning Management as it does for Governance, describing it as an Automated Governance System. The assessors noted the strong focus on technology applications and innovation across the organisation.
LSI Director of Education Dr Paresh Kathrani has stated that some parts of the student journey needed to remain human-facing.
“It’s important to say plainly: there are real, credible gains from AI in areas where volume and consistency matter,” Dr Kathrani writes.
“Round-the-clock triage can route queries and draft basic responses, reducing waiting times and freeing staff for more complex, human-sensitive cases. Formative feedback tools can give immediate guidance on structure, clarity and basic comprehension checks, helping students iterate more often, particularly where teaching teams are stretched.
“Risk, though, clusters where automation touches life chances or identity: admissions decisions, disability support pathways, suspected misconduct, progression and termination, or employability signalling.”
What this new approach means for degree cost structures, the management of AI in assessment, and the value of human engagement in students' learning and educational experience is not yet clear. However, its emergence – and endorsement by the UK regulator – will certainly drive a new wave of discussion across the sector globally, with a particular focus on the future role of university staff and the integration of generative AI tools into education and/or assessment.
The future roles of academic staff, and institutional responses to AI, will be among the critical topics discussed at the Future Campus HE People & Performance conference, hosted at RMIT University in Melbourne on June 22-23.