Media Briefings

Can More Detailed Information On Student Learning Help Teachers Improve Test Scores?

  • Published Date: August 2010

Providing teachers with ‘low-stakes’ feedback on their students’ performance and conducting low-stakes classroom monitoring is not enough – on its own – to improve test scores. That is the central finding of new experimental research in the Indian state of Andhra Pradesh by Karthik Muralidharan and Venkatesh Sundararaman, published in the August 2010 Economic Journal.

High-stakes tests – where teachers and schools are rewarded or sanctioned based on student performance – are an increasingly common feature of school systems worldwide. But they are also controversial because of the belief that they can induce distortions in teacher behaviour, such as ‘teaching to the test’.

Low-stakes tests, on the other hand, focus on providing detailed data on student performance to teachers to help them understand areas of student weakness, set goals, focus efforts better and modify teaching practices. In theory, low-stakes testing can improve teachers’ intrinsic motivation, without the negative side-effects of high-stakes testing.

Until now, there has been very little good empirical evidence on the effectiveness of low-stakes programmes. In their Economic Journal paper, Muralidharan and Sundararaman study whether low-stakes ‘diagnostic feedback’ combined with low-stakes monitoring of classrooms can improve student learning outcomes.

Their study was conducted during a full school year across 200 rural primary schools in the Indian state of Andhra Pradesh:

  • One hundred of these schools were selected by random lottery to receive externally administered learning assessments at the beginning of the school year, followed by detailed feedback reports on student performance, strengths and weaknesses (‘feedback schools’). These schools were also subject to random, unannounced visits throughout the year, during which monitors observed teaching processes and activity.
  • The 100 schools not selected in the lottery (‘control schools’) received no external assessment, no feedback reports and only one unannounced monitoring visit.
  • Students in all 200 schools sat tests – in mathematics and language – at the end of the school year to measure student learning levels.

The study has two main findings:

  • First, teachers in feedback schools performed better on various measures of teaching activity while being observed. In particular, compared with teachers in control schools, these teachers were teaching actively more often, asking more questions of students, reading more from textbooks, using blackboards more and assigning more homework.
  • Second, however, there was virtually no difference in test scores at the end of the year between students in the feedback schools and those in the control schools.
  • Taken together, these two findings indicate that teachers worked harder while being observed, but did not use the feedback reports effectively in their teaching. Therefore, the study suggests that providing teachers with low-stakes feedback on student performance and conducting low-stakes classroom monitoring is not enough – on its own – to improve student learning.

But these results do not imply that diagnostic feedback cannot be useful in improving student learning. Indeed, nearly 90% of teachers perceived the feedback reports to contain useful information. One potential lesson for policy-makers is that teachers may need more detailed guidance and support on how to use the information in feedback reports to modify their teaching practices.

Another potential lesson is that teachers might be more likely to make effective use of feedback when combined with additional extrinsic incentives. These could be positive (such as monetary rewards for improving student performance) or negative (such as sanctions for poor student performance), as are being used in the United States under the No Child Left Behind Act.

The experiment reported in the Economic Journal provided no positive incentives, nor any negative consequences for poor student outcomes. But in a parallel study in the same state at the same time, another 100 schools received both low-stakes feedback and monitoring, as well as monetary incentives for improving student performance.

The results for this set of schools indicate that this combination did, in fact, have a significant positive impact on student learning.


Notes for editors: ‘The Impact of Diagnostic Feedback to Teachers on Student Learning: Experimental Evidence from India’ by Karthik Muralidharan and Venkatesh Sundararaman is published in the August 2010 issue of the Economic Journal.

Karthik Muralidharan is at the Department of Economics, University of California, San Diego. Venkatesh Sundararaman is at the South Asia Human Development Unit, World Bank.

For further information: contact Karthik Muralidharan on +91-96-7683-1222 or +1-617-501-2459; or Romesh Vaitilingam on +44-7768-661095.