This article was written for SecEd magazine and first published in May 2018. You can read the original version on the SecEd website here.
You can access the full archive of my columns for SecEd here.
So far in this series, I’ve argued that our obsession with feedback as the panacea for pupil progress and a proxy for good teaching has led to some questionable, potentially damaging practices.
Dialogic marking and verbal feedback stamps, for example, are a drain on a teacher’s precious time and yet there is no evidence that they have any positive impact on pupils’ progress.
As such, I have been exploring ways of making marking and feedback more meaningful, manageable and motivating.
Before half-term, I explained that feedback thrives on error – that is to say, the difference between what we know and can do, and what we aim to know and do – and works best when it answers three key questions, namely: Where am I going? How am I going to get there? Where to next?
What’s more, feedback works best when it operates on four levels, namely:
- Task and product (e.g. does the answer meet the success criteria?).
- Process (e.g. what is the pupil’s understanding of the concepts and knowledge related to this task?).
- Self-regulation (e.g. how can the pupil reflect on their own learning?).
- Self (e.g. the importance of keeping praise and feedback about learning separate so as not to dilute performance information).
This week, we will focus on how to develop an effective school assessment policy.
Negating the side effects of feedback
Before we consider what should and should not be included in a school assessment policy, we should acknowledge an important caveat: the evidence of what works and does not work is inconclusive.
As I explained earlier in this series, Ofsted has said that it does not expect to see a particular frequency or quantity of work in pupils’ books or folders. Rather, the inspectorate recognises that the amount of work in books and folders will depend on the subject being studied and the age and ability of the pupils. What’s more, its inspectors have been told not to report on a teacher’s marking practice, or make judgements on it – because the evidence is not yet robust – other than to say whether or not it follows the school’s assessment policy.
This raises the question: if the evidence isn’t sufficient for Ofsted to make judgements on what’s right or wrong, how can schools dictate what’s right or wrong in their assessment policies?
According to the Education Endowment Foundation (EEF), although many studies into feedback show very high effects on learning, there are also studies showing that feedback can have negative effects and make things worse.
An assessment policy, therefore, needs to acknowledge that the evidence is inconclusive and that not all forms of marking and feedback are effective and worthwhile. Indeed, some feedback can have adverse effects on pupils’ learning and progress.
So what do we know? What can we say works best? Research suggests that, in order to avoid any negative effects and to be impactful, feedback should be specific, accurate and clear. For example, “it was good because you…” is better than just saying “correct”.
Feedback should also compare what a pupil is doing now with what they have done before. For example: “I can see you were focused on improving X, as it is much better than last time’s Y…”.
Furthermore, feedback should encourage and support the investment of additional effort and be given sparingly so that it is made meaningful. And feedback should provide specific guidance on how to improve rather than simply telling pupils when they are wrong.
Some of the research reviewed by the EEF suggests that feedback should be about complex or challenging tasks or goals as this is likely to emphasise the importance of effort and perseverance as well as be more valued by the pupils.
The EEF says that the quality of existing evidence focused specifically on written marking is low. Very few large-scale, robust studies, such as randomised controlled trials, have – they tell us – looked at marking. Most of the studies that have been carried out in this field are small in scale and/or from higher education or English as a foreign language (EFL) contexts, which make it difficult to translate their findings into a primary or secondary school environment.
Some findings do, however, emerge from the evidence that could help teachers and school leaders in their pursuit of an effective, sustainable and time-efficient marking policy. These include the following, which you may consider in your policy:
- Careless mistakes should be marked differently to errors resulting from misunderstanding. The latter may be best addressed by providing hints or questions which lead pupils to underlying principles; the former by simply marking the mistake as incorrect, without giving the right answer – therefore, our policies should make clear the difference between mistakes and errors and how these are addressed through feedback (I’ll explore this in more depth shortly).
- Awarding grades for every piece of work may reduce the impact of marking, particularly if pupils become preoccupied with grades at the expense of a consideration of teachers’ formative comments – therefore, our policies should specify that we do not expect every piece of work to be marked or for teachers to “tick and flick”.
- Pupils are unlikely to benefit from marking unless some time is set aside to enable pupils to consider and respond to marking – therefore, our policies should make clear that some lesson time needs to be set aside for pupils to respond to feedback and improve their work.
- The use of targets to make marking as specific and actionable as possible is likely to increase pupil progress – therefore, our policies should specify that feedback should, on occasion, be written as targets for improvement.
- Some forms of marking, including acknowledgement marking, are unlikely to enhance pupil progress – therefore, our policies should make clear that we expect teachers to mark less but mark better.
Error or mistake?
As I say above, our assessment policies should make a clear distinction between marking an error and marking a mistake. So what is the difference?
Most studies into the effectiveness of feedback make a distinction between a “mistake”, which is something a pupil can do and normally does do correctly but has not done correctly on one occasion (we may call it a lapse), and an “error” which is something a pupil cannot yet do because they have not yet mastered it or else they have misunderstood it.
When a pupil makes a mistake, research tells us that it should be marked as incorrect, but that the correct answer should not be provided. One study of undergraduates, for example, found that providing the correct answer was no more effective than not marking the work at all: because pupils were not required to think about the mistakes they had made or to recall their existing knowledge, they were no less likely to repeat those mistakes in the future.
When a pupil makes an error – when they get something wrong as a result of an underlying misunderstanding or a lack of knowledge – research tells us that the most effective strategy is to remind pupils of a related rule (e.g. we start sentences with capital letters), or to provide a hint or ask a question that leads the pupil towards a correction of the underlying misunderstanding. Simply marking an error as incorrect (as we would if it were a mistake) is ineffective because pupils do not have the knowledge required to work out what they have done wrong and why.
To code or not to code?
Our assessment policies should strike a balance between providing feedback to pupils that helps them improve and protecting teachers’ work/life balance.
One way of reducing teacher workload is to use comment banks or marking codes. Common codes used in English, for example, are “sp” to indicate a spelling mistake, “p” to indicate missing punctuation, and “//” to indicate where to start a new paragraph.
Some schools use numbered or lettered codes and provide pupils with a key which they can refer back to in order to see what the mark means.
Research tells us that there is no difference in the impact of coded feedback versus full written feedback, so long as pupils understand what the codes mean, of course. Our policies should therefore permit, if not encourage, the use of time-saving strategies such as marking codes.
I have already denounced the school-wide policy of dialogic marking. I have argued that writing detailed comments in pupils’ exercise books to which they are expected to respond and the teacher is, in turn, expected to comment further, is time-consuming for teachers. I’ve also argued that there is little evidence it works in terms of leading to significant academic gains for pupils.
It is true that a US study which analysed 600 written feedback journals used in middle school literacy lessons concluded that the use of teacher questions in the feedback helped to clarify understanding and stretch pupils, and that a Dutch study found that engaging in dialogue led pupils to become more reflective about their work. However, neither study was able to conclude that written feedback was more impactful than verbal feedback. There is, to my knowledge, no evidence that suggests written feedback is preferable.
Indeed, most of the evidence on effective feedback consistently finds that the specificity of feedback – that is to say, how detailed and focused the feedback is – is the key factor in determining its impact, not whether it is verbal or written.
In other words, providing clear success criteria for a piece of work leads to better performance. Setting clear targets when marking, and then reminding pupils of these before they complete a similar piece of work in the future, is also effective.
Our policies should therefore steer clear of mandating teachers to engage in specific types of assessment and feedback – such as dialogic marking – and focus instead on the specificity of that feedback and what is done with it afterwards.
Short-term or long-term?
Research tells us that short-term targets are more effective than longer-term goals. What’s more, pupils make better progress when they are only working towards a small number of goals at any given time. Our policies may, therefore, specify that feedback should include short-term goals and that pupils should not be given too many targets at any one time.
Targets for improvement are also more effective when they are co-constructed with – or constructed entirely by – pupils. Involving pupils in the process of setting targets helps them to better understand those targets and to take ownership of working towards them. If nothing else, it ensures that targets are phrased in language that pupils understand.
Follow me on Twitter: @mj_bromley