Guides · evidence & practice

After a bad mock: error taxonomy, spaced returns, and stopping the shame spiral

~2 min read · Last updated 1 April 2026


Thesis: a mock exam is a measurement event that can also be a learning event if you treat it as structured feedback. Retrieval practice strengthens memory relative to restudy (Roediger & Karpicke, 2006), and distributed practice improves retention versus massing (Cepeda et al., 2006). A bad mock therefore implies a map: which failure modes repeated, which topics need sooner returns, and which conditions (time, stress, format) differ from your usual practice. Shame is not in the syllabus; repair is.

1. Separate knowledge gaps from exam-craft gaps

Sort errors: blank retrieval, misread stem, method slip, time overrun, presentation. Each class needs a different next session. Drilling definitions does not fix time allocation; timing drills do not fix conceptual blanks. Taxonomy first — otherwise you “revise harder” in the wrong dimension.
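The sorting step above can be sketched as a tiny triage routine. This is a toy illustration, not anything the article prescribes as software: the class names mirror the list in the text, and the mapping from error class to next-session type is a hypothetical example of "each class needs a different next session".

```python
from collections import Counter

# Hypothetical mapping from error class to the kind of session that repairs it.
# Drilling definitions fixes blank retrieval, not time overruns, and vice versa.
NEXT_SESSION = {
    "blank_retrieval": "retrieval drill with feedback",
    "misread_stem": "stem-annotation practice",
    "method_slip": "worked-method micro-drill",
    "time_overrun": "timed single-question drill",
    "presentation": "mark-scheme rewrite",
}

def triage(errors):
    """Count each error class and pair it with its repair session,
    most frequent class first."""
    counts = Counter(errors)
    return [(cls, n, NEXT_SESSION[cls]) for cls, n in counts.most_common()]

plan = triage(["blank_retrieval", "time_overrun", "blank_retrieval", "method_slip"])
```

Running the triage on a marked-up mock makes the dominant failure mode explicit before you choose what to revise, which is the point of taxonomy-first.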

2. Repeat the smallest unit that failed until it changes under time

This is the same wound-first logic as honest past-paper practice: full papers are for integration; micro-drills are for repair. Retrieval attempts with feedback are the lever (Karpicke & Roediger, 2008).

3. Space returns — sooner for fragile, wider for stable


You cannot optimise every interval by hand, but you can follow the direction the spacing research points: fragile knowledge needs shorter gaps at first; stabilised knowledge tolerates longer ones (Cepeda et al., 2006). A simple rule: calendar the next return before you feel fully fluent on the weak topic.
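As a sketch of that directionality, here is a toy interval function. The `stability` score, the base interval, and the growth factor are all assumptions for illustration; the only claim carried over from the text is the direction: low stability means a sooner return, high stability tolerates a wider gap.

```python
from datetime import date, timedelta

def next_return(last_seen: date, stability: float,
                base_days: int = 2, growth: float = 2.0) -> date:
    """Toy spacing rule: a topic with stability near 0.0 (fragile) returns
    in roughly base_days; stability near 1.0 (stable) earns a geometrically
    wider gap. The scaling exponent is an arbitrary illustrative choice."""
    interval = base_days * (growth ** (stability * 4))
    return last_seen + timedelta(days=round(interval))
```

So a topic you blanked on today comes back in two days, while a topic that has held up repeatedly can wait a month; either way, the return is on the calendar before fluency makes you overconfident.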

4. One full mock only after micro-skills stop collapsing

  • If timed sections still disintegrate, another full paper mostly remeasures the same wound.
  • Re-enter full papers when single-type performance holds under clock pressure.
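The re-entry criterion above can be written down as a check. Everything here is hypothetical: the score scale, the threshold, and the window size are placeholder choices standing in for "single-type performance holds under clock pressure".

```python
def ready_for_full_mock(timed_scores, threshold=0.8, window=3):
    """Return True only if the last `window` timed micro-drill scores
    all sit at or above `threshold` -- a stand-in for performance that
    holds, rather than collapses, under the clock."""
    recent = timed_scores[-window:]
    return len(recent) == window and all(s >= threshold for s in recent)
```

Until the check passes, another full paper mostly remeasures the same wound; once it passes, the full mock is measuring integration rather than a known weakness.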

What Offload aims to do (without implementation detail)

We want weak signals from attempts — mocks, questions, cards — to feed a schedule you can trust: what returns when, and how the week moves when something slips. We are not publishing scoring or scheduling internals here; the intent is to align product behaviour with retrieval and spacing evidence instead of leaving students to rebuild the plan from guilt.

References

  1. Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.
  2. Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968.
  3. Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380.
