The Headteacher

What Will Tomorrow’s AI-Assisted Classrooms Look Like?

May 3, 2019, 8:06 GMT+1
  • An education sector that uses artificial intelligence could usher in a bright future, says Professor Rose Luckin – but we need to start making the right decisions now...

Right now, we’re using artificial intelligence in ways many of us don’t realise. Every time we do a Google search, or use voice-activated assistants like Siri or Alexa, we’re using AI. It’s become part of an infrastructure that most of us use every day.

In education, the AI systems currently in use fall into one of two categories. The first covers tutoring systems, which support students in an individualised way to help them learn specific subject areas. Companies like Century, for example, have developed artificially intelligent platforms that deliver various curricular subjects to students in a personalised way.

This flavour of AI is used to personalise instruction and tutoring to students’ specific needs, which can be very effective, but it isn’t the same as having a human teacher. I’m not of the belief that AI can or should replace human teachers, but I believe it can provide highly effective, efficient and individualised tutoring to complement human teachers, while giving immensely valuable feedback to those teachers about how students are performing.
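
To make that loop concrete, here is a minimal sketch in Python – with invented exercise names, numbers and update rule, not how Century or any real product actually works. The idea is simply: update a running estimate of a student’s mastery from their answers, then pick the next exercise to match.

```python
# Minimal sketch of an adaptive tutoring loop (illustrative only).
# 'mastery' is a running estimate, between 0 and 1, of how well the
# student currently knows the topic.

ALPHA = 0.3  # how strongly each new answer shifts the estimate

def update_mastery(mastery: float, answered_correctly: bool) -> float:
    """Nudge the estimate towards 1.0 after a correct answer, 0.0 otherwise."""
    target = 1.0 if answered_correctly else 0.0
    return mastery + ALPHA * (target - mastery)

def pick_next_exercise(mastery: float, exercises: list) -> dict:
    """Choose the exercise whose difficulty sits closest to the estimate."""
    return min(exercises, key=lambda ex: abs(ex["difficulty"] - mastery))

# A hypothetical item bank, with difficulty on the same 0..1 scale.
exercises = [
    {"name": "fractions: halves and quarters", "difficulty": 0.2},
    {"name": "fractions: common denominators", "difficulty": 0.5},
    {"name": "fractions: mixed operations", "difficulty": 0.8},
]

mastery = 0.5  # neutral starting estimate
for correct in [True, True, False, True]:  # a made-up answer history
    mastery = update_mastery(mastery, correct)
    chosen = pick_next_exercise(mastery, exercises)
    print(f"mastery = {mastery:.2f}, next exercise: {chosen['name']}")
```

The same running estimate is also the kind of thing a system could surface back to the teacher as feedback on how each student is performing.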

The second main category is recommender systems. Similar to how services such as Amazon will recommend products to you – except more useful – companies such as Filtered will intelligently recommend resources that best suit the needs of educators and learners.
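
As a rough illustration of the principle – and emphatically not Filtered’s actual method – a simple content-based recommender can rank resources by how well their tags overlap with a learner’s stated needs. All names and data here are invented:

```python
# Illustrative content-based recommender: rank resources by tag overlap
# with a learner's needs.

def score(resource_tags: set, learner_needs: set) -> float:
    """Jaccard similarity: shared tags divided by the union of both tag sets."""
    if not resource_tags and not learner_needs:
        return 0.0
    return len(resource_tags & learner_needs) / len(resource_tags | learner_needs)

resources = {
    "Intro to fractions video": {"maths", "fractions", "ks2"},
    "Essay-writing workshop": {"english", "writing", "ks3"},
    "Fraction word problems": {"maths", "fractions", "problem-solving"},
}

needs = {"maths", "fractions"}

ranked = sorted(resources, key=lambda name: score(resources[name], needs), reverse=True)
for name in ranked:
    print(f"{score(resources[name], needs):.2f}  {name}")
```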

What is and isn’t AI?

I would define AI as a system where the computer learns. As it processes data about learners, teachers and educational interactions, it gradually adapts and improves over time. It’s therefore extremely important that teachers, leaders, managers and budget holders understand when, to be frank, they’re dealing with a snake oil salesman.

If a salesperson can’t explain to you precisely, in ways you can understand, what data their solution will collect, how it’s processed and how that processing will enable the AI to learn and adapt, don’t touch it.
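
One way to picture the distinction between AI and a system merely marketed as AI: a hand-written rule stays the same forever, while a learning system’s behaviour is shaped by the data it sees. A toy contrast, with every name and number invented for illustration:

```python
# Toy contrast between a fixed rule and a system that learns from data.

def fixed_rule(score: float) -> str:
    """Hand-written rule: never changes, no matter what data arrives."""
    return "needs support" if score < 50 else "on track"

class LearningThreshold:
    """Learns its cut-off from observed outcomes instead of hard-coding it."""
    def __init__(self) -> None:
        self.threshold = 50.0  # starting guess
        self.rate = 0.5        # deliberately aggressive, for demonstration

    def observe(self, score: float, needed_support: bool) -> None:
        # If students scoring above the threshold still needed support,
        # move the threshold up towards their scores.
        if needed_support and score > self.threshold:
            self.threshold += self.rate * (score - self.threshold)

    def classify(self, score: float) -> str:
        return "needs support" if score < self.threshold else "on track"

model = LearningThreshold()
for score, needed in [(58, True), (62, True), (60, True)]:
    model.observe(score, needed)

print(f"fixed rule says a score of 55 is: {fixed_rule(55)}")
print(f"learned threshold is now {model.threshold:.1f}, "
      f"so a score of 55 is: {model.classify(55)}")
```

The fixed rule gives the same answer on day one and day one thousand; the second system has adapted to evidence. That adaptation, and the data that drives it, is exactly what a vendor should be able to explain.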

An important thing to note is that AI’s real power is its invisibility. Picture an intelligent infrastructure underpinning everything we do. Then try to imagine how this intelligent infrastructure can be used to tell teachers everything they need to know about their students, in order to provide them with the best possible human support. At the same time, that infrastructure can be telling students what they need to know about themselves, so that they can become better learners.

People might interact with this infrastructure through a voice-activated interface, a form of augmented reality or even via a robot – but it’s more helpful to think about the intelligence involved, rather than the interfaces used to access it.

Ethical algorithms

Given that AI, and machine learning in particular, is powered by data, we need to consider some vital questions. How is that data to be collected? Will the data be representative of the population for whom the technology is being developed? Will the data be stored in a safe, secure and private way? And will that data be processed using algorithms that are ‘ethically designed’ – i.e. built by people who understand the diversity of the population among whom those algorithms will be deployed?
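
As a deliberately simplified example of the representativeness question, a developer might check whether each group’s share of the training data roughly matches its share of the population the tool will serve. The figures below are invented:

```python
# Simplified representativeness check: compare group shares in the training
# data against the target population.

population = {"girls": 0.49, "boys": 0.51}     # who the tool will serve
training_data = {"girls": 0.30, "boys": 0.70}  # who the data actually describes

TOLERANCE = 0.05  # flag any group over- or under-represented by > 5 points

for group, expected in population.items():
    gap = training_data.get(group, 0.0) - expected
    if abs(gap) > TOLERANCE:
        direction = "over" if gap > 0 else "under"
        print(f"WARNING: {group} {direction}-represented "
              f"by {abs(gap):.0%} in the training data")
```

A real audit would look at many more dimensions than this, but even a check this crude would catch a system trained overwhelmingly on one group of students.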

I’m a co-founder of the Institute for Ethical AI in Education, which was formed at the end of 2018 and is based at the University of Buckingham. Particularly now that we’re expecting people to embrace lifelong learning, education is fundamentally one of the most important interactions we want people to have, so we have to protect it. Ultimately, there will always be people wanting to do harm, so the regulation we’ll need must include an element of education: people need to understand what they should and shouldn’t be worried about, and how to keep themselves safe.

Commercial sensitivity

It’s been interesting to observe how some big tech companies have already been taken to court over GDPR compliance failures, highlighting the need for any AI regulation to be properly policed – but you also have to work with companies and bring them on board. If you can embed a set of ethics within a company’s culture from the ground up, you’re more likely to end up with products and services that won’t fall foul of ethical regulations and guidelines.

It’s true that some companies will object to oversight of their activities, citing ‘commercial sensitivity’. I believe it’s justifiable for companies to maintain some degree of commercial privacy, but that shouldn’t prevent them from making what they’ve developed explainable and intelligible. And it doesn’t stop them from being absolutely clear as to whether their algorithms have been designed in an ethical way, and the degree to which they’re treating their data ethically.

I don’t accept commercial arguments as a ‘get out’ for not being ethical. We can and must insist that companies clearly explain their ethical processes and procedures, and provide evidence to show that they’re processing their data and training their employees responsibly.

The most important thing at the moment is for everybody to understand enough about AI to keep themselves safe. Everyone should feel confident enough not to simply give their data away, but to instead ask companies: why? Why do you want this information? What are you going to do with it? How do I know you’re going to keep it safe?

We also need those in charge of making decisions in this area to understand enough about AI to ensure that their decisions are sound. I was struck recently by the US Congress’s questioning of Facebook CEO Mark Zuckerberg, and by how he wasn’t exactly probed robustly. We have to help everybody understand enough about AI to know what questions to ask, and to know when they’ve been given satisfactory answers.

20 years hence

Assuming the best-case scenario, what might a classroom look like in 20 years’ time, with sophisticated AI working in the background? If we get it right, teachers would have much more time to spend with students face-to-face. We could have a situation where the subject fundamentals students really need to understand are taught effectively and efficiently via AI tutoring systems, freeing up teachers for quality human interactions.

Those teachers will understand the needs of their students to a level of detail we’ve never seen before. Teachers and learners alike will be empowered by that intelligent infrastructure to interact in ways conducive to preparing students for a world that’s very different to ours; one that prizes collaborative problem solving, flexibility, mental agility and resilience in the face of big, technological transitions.

It may well be that the roles of ‘teacher’ and ‘learner’ become more fluid. Teachers might move through groups of learners in different ways, and there may be a very different physical infrastructure within education. If people are expected to learn all of the time, we could see schools, colleges and universities become discrete elements within a wider learning repertoire. We’ll be using AI to make ourselves more intelligent, not the other way round.

Conversely, my dystopian fear would be that we’re faced with institutions packed full of AI tutoring systems to the detriment of everything else; where kids are plugged into audio, screen or holographic interfaces for most of the time, with comparatively little human interaction (though there would be some minders present to ensure no one misbehaves). The result would be that they miss out on all that rich, human interaction because it’s ‘too expensive’.

I love technology and I love AI – but I don’t want our schools, universities and colleges to be dominated by technology. I want there to be a powerful, intelligent infrastructure that empowers humans to focus on improving human intelligence.

Rosemary Luckin is a UCL professor whose research involves blending theories from the learning sciences with techniques from AI. She is the author of Machine Learning and Human Intelligence: The Future of Education in the 21st Century, and director of EDUCATE – a London hub for educational technology startups.