This article was written for SecEd magazine and first published in June 2018. You can read the original version on the SecEd website here.
I will begin this final instalment by exploring two important considerations related to feedback: the when and the how.
The timing of feedback is important because if it is given too early, certainly before pupils have had a chance to work on a problem for themselves, then they will learn less.
If it is given too late, pupils will have moved on to new learning and the feedback will be irrelevant, or they will have repeated the same mistakes, making the feedback less impactful than it would have been had it been given promptly.
According to Professor Dylan Wiliam, giving feedback that includes the correct answer after a test increases pupils’ capacity to learn because it enables them to correct any errors in their work. The critical mechanism in learning from tests, Wiliam argues, is successful retrieval. However, if pupils do not retrieve the correct response after taking the test and have no recourse to learn it, then the benefits of testing can be limited or indeed absent altogether.
As such, providing feedback after a retrieval attempt, regardless of whether the attempt was successful or unsuccessful, will help to ensure that retrieval is successful in the future.
Conventional wisdom – supported by studies in behavioural psychology – suggests that providing immediate feedback is best. However, recent experimental results have shown that delaying feedback might actually be more powerful.
In one study, for example, pupils read a text and then either took or did not take a multiple-choice quiz. One group of pupils who took the quiz received correct answer feedback immediately after making each response (immediate feedback); another group who took the quiz received the correct answers for all the questions after completing the entire test (delayed feedback).
One week after the initial session, pupils took a final test in which they had to produce a response to the question that had formed the stem of the multiple-choice question (in other words, they had to produce an answer of their own rather than selecting one from a list of options). The final test consisted of the same questions from the initial multiple-choice quiz and comparable questions that had not been tested.
The study found that taking an initial quiz (even without feedback) tripled final recall relative to only studying the material. When correct answer feedback was given immediately after each question in the initial quiz, performance increased by another 10 per cent. However, when the feedback was given after the entire test had been completed, it boosted final performance even more. In short, the study concluded that delayed feedback led to better retention than immediate feedback.
Although giving the answers to questions straight after a test is still relatively immediate feedback, the benefits of delayed feedback might represent a type of spacing effect.
Ultimately, what matters most when considering when to give feedback is the mindfulness with which pupils will engage with it; it also pays to remember that sometimes less is more. Feedback is best given, therefore, just before pupils have the time to act upon it in class.
When considering whether to give verbal or written feedback, there is very little research on their relative merits. However, Boulet, Simard and De Melo sought to answer this question in 1990 when they studied 80 Canadian pupils. They divided the pupils into three groups: one group was given written feedback, a list of weaknesses and a work plan; the second group was given verbal feedback on the nature of their errors plus a chance to work on improvement in class; and the third group was given no feedback.
At the beginning of the study there were no differences in achievement. All three groups fell short of the 80 per cent mastery set for the task. At the end, all groups still fell short but the second group scored significantly better.
The conclusion to be drawn, therefore, is that whether feedback is given verbally or in writing matters far less than giving pupils time to use the feedback in class to improve their work.
However, as I have explained, praise is best given verbally and specific formative guidance is best written so that it can be referred to while pupils respond to it and redraft their work.
Putting it into practice
In part 5 of this series (see link below) I shared some thoughts on what should and should not be included in a school’s assessment policy. But, once written, how should a policy be translated into practice? In other words, how can we ensure our good intentions lead to genuine improvements both in terms of teacher workload and pupil progress?
If we don’t consider a policy’s implementation, there is a real danger it will forever remain an unread document on a dusty shelf.
In February, the Education Endowment Foundation (EEF) published a school leaders’ guide to implementation called Putting Evidence to Work.
In the foreword, chief executive Kevan Collins said: “Schools today are in a better position to judge what is most likely to work in their classrooms than they were 10 years ago. We have access to more robust evidence about which teaching and learning strategies have been shown to be effective – and, as the evidence base has grown, so too has teachers’ appetites for it.”
However, he also cautioned: “Generating evidence can only take us so far. Ultimately, it doesn’t matter how great an educational idea or intervention is on paper; what really matters is how it manifests itself in the day-to-day lived reality of schools.”
In short, it doesn’t matter what the evidence tells us about the positive impact of feedback on educational outcomes if we implement it badly – and we know that, although there is some strong evidence about the positive effects of feedback, there is also some evidence pointing to the negative effects.
Yes, feedback works but, as with all teaching strategies, it only works when it is done well. And by “well” I mean when it is – to paraphrase the 2016 report by the Workload Challenge Working Group – meaningful, manageable and motivating.
We can only achieve these three aims when we ensure that marking and feedback are not burdensome for the teacher or pupils, and are focused on closing the feedback loop.
The implementation process the EEF suggests is as follows. Stages one and two are concerned with building solid foundations.
Stage one is to treat implementation as a process, not an event; and to plan and execute it in stages. The EEF suggests that schools allow enough time for effective implementation, particularly in the preparation stage, and that they prioritise appropriately.
Stage two is to create a leadership environment and school climate that is conducive to good implementation. The EEF suggests that schools: set the stage for implementation through school policies, routines and practices; identify and cultivate leaders of implementation throughout the school; and build leadership capacity through implementation teams.
Stage three is termed “explore”, and involves defining the problem the school wants to solve, and identifying appropriate programmes or practices to implement.
Stage four is termed “prepare” and involves creating a clear implementation plan, judging the readiness of the school to deliver that plan and preparing staff and resources.
Stage five is “deliver”. Here, the leaders need to support staff, monitor processes, solve problems and adapt strategies. The final stage is “sustain” and involves making plans to ensure changes are sustained and scaled up.
A process not an event
The key take-away message from this report, then, is that improving our assessment policies and practices is a long-term process, not a one-off event. Although it might be tempting to announce to staff tomorrow morning that we are abandoning dialogic marking and introducing a simplified approach celebrating teacher autonomy, by so doing we are in danger of replacing one unworkable system with another, or of sowing uncertainty and inconsistency.
One of the examples provided in the EEF implementation report is flash marking, and it is worth considering here. Flash marking is the use of codes that correspond to success criteria. The first stage in implementing this low-energy/high-impact marking and feedback strategy, the EEF argues, is to identify the problem…
Teachers, they say, spend too much time on ineffective feedback. This has a negative effect on their workload. It also leads to undesirable pupil behaviours: ineffective self- and peer-assessment, feedback that fails to develop pupil metacognition, a lack of pupil engagement with feedback, and feedback that demotivates some pupils. It can also have a negative impact on attainment, with pupils making less than expected progress.
The next stage is to identify the “active ingredients” of the intervention. For flash marking, the EEF recommends removing grades from day-to-day feedback. Then they recommend using codes within lessons in order to provide feedback that is skill-specific. The feedback codes are given as success criteria and used to analyse model answers.
They then recommend that feedback is personalised and used to identify individual areas for development, and that flash marking codes are used to inform future planning and intervention. Fourth, they recommend that targets for improvement are addressed in future work that focuses on a similar skill, identified by a flash marking code. Pupils then justify where they have met their previous targets by highlighting their work. Skill areas can be interleaved throughout the year to allow pupils to develop their metacognitive skills.
The third stage of implementation is to put intervention strategies into play. This might involve training. The EEF recommends three training sessions over two years, attended by two staff (including the subject leader). Training can then be cascaded to other members of staff.
The first training session acts as an introduction to the theory and principles of flash marking, focusing on how to embed the codes into existing practice. The second training session is for the moderation of work and may involve the use of demonstration videos showing how to use flash marking to develop metacognitive skills and inform curriculum planning. The third training session is a refresher for any new members of staff and an opportunity to share good practice.
This third stage includes the development of educational materials. This might involve online portal access used to share training resources and demonstration videos. It might involve webinars.
Throughout this third stage of implementation, there needs to be on-going monitoring. This might involve the periodic moderation of work via the web portal. And there may need to be on-going coaching, too, and other forms of support including observations, team-teaching and co-planning. The fourth stage is a focus on implementation outcomes.
In this seven-part series I’ve argued against one-size-fits-all assessment policies that mandate teachers to assess pupils at set times and in set ways because these may not be appropriate to the task, the pupil, the teacher, the subject or the phase.
I have also argued against burdensome assessment practices such as dialogic marking and the use of verbal feedback stamps which steal a lot of teachers’ time in return for very little (if any) impact on pupil progress.
I have explored ways of making marking more meaningful, manageable and motivating, and explained how to make feedback fair, honest, ambitious, appropriate, wide-ranging and consistent. I have also suggested that feedback should be given sparingly, should distinguish between errors and mistakes, and should constitute a small number of short-term targets.
I’ve said that if we are to encourage our pupils to engage with assessment feedback and respond to it, we must ensure that our feedback is focused, positive, simple, timely and personal. We must also make effective use of self- and peer-assessment activities.
I’ve explained that what matters most when considering when to give feedback is the mindfulness with which pupils will engage with it, and that sometimes less is more. Feedback is best given, therefore, just before pupils have the time to act upon it in class. And whether feedback is given verbally or in writing matters far less than giving pupils time to use the feedback in class to improve their work.
So what can we take from this exploration of assessment and feedback? Simply this: context is all and pragmatism is essential. What works is what works and the best person to decide on this is the teacher. Assessment policies, therefore, need to allow flexibility and autonomy. Dictating when and how feedback should be given can lead to unmanageable levels of teacher workload and be counter-productive to pupils’ progress.
Follow me on Twitter: @mj_bromley