E-Learning at Universities: Does Starting with Difficult Questions Affect Student Performance?

A new study co-authored by Jan Marcus and Thomas Siedler, members of the Berlin School of Economics, together with Agata Galkiewicz, investigates an important question about online university assessments: does the order of questions impact student performance? As online exams became more widespread during the COVID-19 pandemic, many universities adopted randomized question orders to reduce the potential for cheating. However, this strategy raised concerns about whether the order of questions—especially starting with more difficult ones—might negatively affect students' performance by causing stress or disrupting their time management.

The researchers analyzed over 8,000 online assessments from economics and statistics courses at two of Germany's largest universities. The assessments, administered during and after the pandemic, presented questions in a randomized order, so whether a student started with a difficult question was determined by chance rather than by ability or preparation. By comparing students who started with difficult questions to those who began with easier ones, the study could therefore isolate the effect of question order on student outcomes.
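
To make the identification idea concrete, the sketch below (in Python, with entirely hypothetical column names and simulated data, not the paper's own code or results) shows why randomized question order lets a simple regression estimate the effect of starting with a difficult question.

    # Illustrative sketch only: hypothetical columns and simulated data, not the study's code.
    # Because question order is randomized, the indicator for "started with a difficult
    # question" is independent of student ability, so regressing the total score on that
    # indicator estimates the causal effect of question order.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 8000  # roughly the number of assessments analyzed in the study

    df = pd.DataFrame({
        "first_q_difficult": rng.integers(0, 2, n),            # 1 if the first question shown was difficult
        "course": rng.choice(["economics", "statistics"], n),  # course indicator as a control
        "score": rng.normal(70, 12, n),                        # total score in percent (simulated)
    })

    # Regress the score on the randomized indicator, controlling for course.
    model = smf.ols("score ~ first_q_difficult + C(course)", data=df).fit(cov_type="HC1")
    print(model.params["first_q_difficult"], model.bse["first_q_difficult"])

On simulated data the coefficient itself is meaningless; the point is only that random assignment of question order is what allows such a comparison to be interpreted causally.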

The findings reveal no evidence that starting with a difficult question affects overall performance. This "zero effect" holds even when students face two difficult questions at the start of the assessment. The result is consistent across both universities and is stable both during and after the pandemic. Importantly, the data also show no significant effects for specific subgroups of students or on the distribution of performance outcomes, supporting the robustness and generalizability of the results.

These findings carry important policy implications for universities and educators. Randomizing question orders proves to be an effective and low-cost strategy to enhance exam security without compromising students’ outcomes. The study reassures institutions that this approach maintains both academic integrity and fairness, making it a reliable tool for online assessments. Educators can focus their efforts on improving the content and design of assessments rather than worrying about the order in which questions are presented.

To read the full paper: https://opus4.kobv.de/opus4-hsog/frontdoor/deliver/index/docId/5657/file/BSoE_DP_0055.pdf

About the authors

Agata Galkiewicz is a researcher at the Institute for Employment Research (IAB) in Nuremberg and a Ph.D. student at the Chair of Economics, in particular Economic Policy, at the University of Potsdam. Her research focuses on applied microeconometrics, labor economics, and the economics of education. She holds a Master's degree in Economics from the University of Mannheim and has contributed to studies on education policies and labor market outcomes.

Jan Marcus is a Professor of Applied Statistics at Freie Universität Berlin, a faculty member of the Berlin School of Economics, and affiliated with the Institute of Labor Economics (IZA). His research focuses on the economics of education, health, and labor, with a strong emphasis on policy evaluation using advanced econometric methods.

Thomas Siedler is a Professor of Economics at the University of Potsdam, a faculty member of the Berlin School of Economics, and affiliated with the Institute of Labor Economics (IZA). His research focuses on labor economics, health economics, and family economics, using applied microeconometric methods.