This article was written for SecEd magazine
Now that all pupils are back in school, the national debate has turned to the issue of “lost learning”. However, I am very uncomfortable with the terminology many are using – “lost learning”, “catch-up” – and all the rhetoric surrounding these concepts.
The problem with ‘lost learning’
Any talk of “loss” adopts a deficit model. I am not suggesting that Covid-19 has been a positive experience – of course, it has not. Many of us lost loved ones and many had difficult lockdown experiences – isolation, mental health challenges, domestic violence, anxiety and more.
But to adopt a deficit model is, in my opinion, to double down on the problems. If we talk to pupils about their “lost learning”, about their “lack of progress”, and about “gaps” in their knowledge – as well as about the devastating consequences of the pandemic – we only serve to heighten anxieties, and thus delay their return to “normality” and stunt their future progress.
Furthermore, how do we know if pupils have “lost” any learning at all? How are we quantifying this loss? Is such data accurate and is it helpful?
A study by the National Foundation for Educational Research (Sharp et al, 2020; SecEd, 2020) published earlier this month and based on a weighted sample of almost 3,000 school leaders and teachers across more than 2,200 mainstream primary and secondary schools in England, had a somewhat worrying conclusion: “Nearly all teachers estimate that their pupils are behind in their curriculum learning, with the average estimate being three months behind. Over half of teachers estimate that the learning gap between disadvantaged pupils and their peers has widened.”
The report also found that teachers in the most deprived schools were more than three times more likely to report that their pupils were four months or more behind in their curriculum learning than teachers in the least deprived schools.
I do not doubt the integrity of the researchers nor that of the school leaders and teachers who took part, but how accurate can this data be?
It all seems a little spurious to me. How are teachers quantifying lost learning – what is a month of learning, for example? – and are they all using the same metric?
Pupils may have “lost” a few months of classroom teaching, and therefore they might not have been taught parts of the planned curriculum. Home-schooling experiences will undoubtedly have been varied: some pupils have excelled, others have struggled, and thanks to the digital divide many have not even been able to engage. But does this equate to “lost learning”?
Do we really know, at this stage, how effective home-schooling and online learning have been? Do we really know what pupils have retained and what they have “forgotten” over the course of lockdown? And do we know what effect the parts of the curriculum that were missed will have on pupils’ long-term learning and progress?
Do we actually mean ‘lost learning’ at all…
…or are we talking about something else entirely? Earlier, I placed the word “forgotten” in inverted commas because learning (and forgetting) is, as we know, a complex beast.
Paul von Hippel, an associate professor in the LBJ School of Public Affairs at the University of Texas, studied summer learning loss in 2019. He found that it might not be as pronounced as we first thought and may, in fact, not exist at all. He also concludes that we cannot presume learning loss just because pupils are not in school (von Hippel, 2019).
And writing in August, Professor Dylan Wiliam expressed his doubts over reports of lost learning too: “Psychologists who research memory, like Bjork, point out that how easy it is to retrieve something from memory is different from how well something has been learned – what they call retrieval strength and storage strength, respectively.
“When we test students at the beginning of a school year, we are testing retrieval strength, which, if the students have not been reviewing the material they learned in the previous year, will have declined over the summer.
“But how well something is learned – storage strength – does not decline, and restudying the material increases both retrieval strength and storage strength.
“In other words, what students can do on their first day back in school – whether face-to-face or online – is a poor gauge of what they have actually learned.”
More importantly, Prof Wiliam concluded, restudying material increases storage strength more when retrieval strength is low – so “an hour spent restudying material after the summer break will have more impact on long-term learning than the same time spent studying it before the summer break” (Wiliam, 2020).
I can attest to this. I studied French at GCSE. I did well but did not continue at A level. It is now 30 years since I last studied the language. However, during lockdown, I brushed up on my French using a language learning app.
I started with lessons in the basics, but after completing a few exercises I was quickly prompted to skip this stage. I found that my knowledge had not been “lost” – it was still stored in my long-term memory – but had instead become hard to retrieve, less readily available.
The content I was being tested on was familiar and much of it came back quickly with a little prompting. It was less learning loss, more learning decay.
I suspect it is the same for our pupils. They will not have lost learning during lockdown – what we taught them prior to lockdown will still be in there somewhere – rather, it will have decayed slightly and will require some retrieval practice to dust it down and make it more readily accessible.
With some retrieval practice approaches, many students could find themselves back on the same trajectory quickly. Lockdown could even have helped to incubate their learning, to forge new connections and develop more schemata, or at least provided some time for reflection.
So, “learning loss” implies something that was previously learned has now been lost. However, what is more likely is that pupils have suffered an opportunity cost – pupils have missed out on the opportunity to learn new things.
What should we do about it?
Education secretary Gavin Williamson recently suggested that schools should formally assess pupils to identify how far behind they are. Responding to a question from the chair of the Education Select Committee, Robert Halfon MP, about whether there should be “an urgent assessment or benchmarking made of all children in school, with the data collected by the Department for Education”, Mr Williamson said the idea was “something we’re looking at and will be doing”.
I think this is misguided. To help our pupils get back on track, we should, I think, eschew formal assessments.
Instead, we should make sure our pupils feel welcomed and safe; we should attend to safeguarding and mental health concerns. And then we should put quality teaching first: through good teaching, formative assessments can identify any academic concerns. If nothing else, I am doubtful that lots of formal assessments will be helpful; indeed, they may add to the stresses of lockdown.
Yes, there may need to be some form of “recovery curriculum”, but only in the sense of helping pupils to re-establish routines and fine motor skills, and to help them adjust to school life once more. But not in the sense of recovering lost learning. Rather, we should get back to teaching the curriculum and attempt to cover as much depth and breadth as we ordinarily hope to achieve.
In the article I referenced earlier, Prof Wiliam also says that the use of standardised tests is unlikely to be of much help: “Standardised tests can tell us how far along a continuum of achievement a student is, but knowing that a student is at the 30th percentile of achievement for his year tells us nothing about what kinds of instruction we should provide.
“Worse, because many such standardised tests adjust the items a student is asked to answer according to how well the student answered previous items (sometimes called “adaptive tests”), we don’t even know which items the student answered correctly. All we can do is place the student somewhere along a line of increasing achievement.”
He suggests that, first, unless you want to be able to put next year’s test results in context by having data that show how little pupils remembered from the previous year, standardised tests are not going to be of much help.
Second, he says, school leaders and teachers must decide whether material that has not been covered from last year needs to be covered: “While some authors have argued forcefully the desirability of starting students on this year’s curriculum, that aspiration must be tempered in the case of more hierarchical subjects like math. After all, if students cannot generate sequences of equivalent fractions, then they are unlikely to be able to master addition of fractions with any understanding.”
Third, he says, instead of relying on commercially produced tests, teachers would be better advised to use quick surveys of student achievement: “These sort of assessments could take various forms, from using single, well-designed multiple-choice questions to gauge a class’ recall of the prerequisites for the next lesson, to getting students to use finger-voting (one finger for A, two for B, and so on), to using the chat facility when teaching online. This will provide teachers with useful information about where to pitch their instruction (and also provides students with retrieval practice).”
In short, we should avoid talk of learning loss and help our pupils to adjust to school life, including by re-establishing good habits and study skills. Then we should get on with teaching the curriculum, using on-going formative assessments – which double as retrieval practice tasks – to ascertain what our pupils know and what they do not yet know.
As David Ausubel said in 1968: “If I had to reduce all of educational psychology to just one principle, I would say this: The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly.”