
Students preparing for IB Physics SL right now are, in many cases, revising the wrong content, drilling the wrong exam conditions, and measuring their scores against the wrong benchmarks—because the resources they’re using were built for a course that no longer exists. The syllabus redesigned for first assessment in 2025 replaced the old core-plus-options structure with a unified five-theme model that every student follows, confirmed that calculators are permitted in all examination papers, and updated the data booklet to match.
None of that is marginal. Resources organized around option-based coverage assume a different content scope. Materials built for partial non-calculator conditions train the wrong technique. Historical cut-scores anchor to a different paper structure entirely.
Running a quick alignment check against three things will tell you whether a resource belongs in your preparation or not:
- Course model: aligned resources organize content around the current five-theme structure, or at minimum do not assume an option-based coverage model. If you see planning built around choosing an option, treat it as legacy.
- Data booklet: aligned resources explicitly reference the updated post-redesign data booklet, not an undated or older version.
- Pre-2025 past papers: only keep a resource in regular rotation if you can state in one sentence that you’re using those papers for physics thinking and method communication—not for paper structure, timing strategy, or boundary targets. If you can’t make that separation cleanly, treat the resource as risky.
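For students who prefer an explicit checklist, the three checks above can be sketched as a small script. Everything here is an invented illustration—the field names (`uses_option_model`, `data_booklet`, `purpose_statement`) come from no official tool—but the decision order matches the screen described above.

```python
def screen_resource(resource):
    """Apply the three-point alignment screen to one study resource.

    `resource` is a dict with illustrative (invented) fields:
      uses_option_model: bool  - planning built around choosing an option
      data_booklet: str        - e.g. "2025" or "unknown"
      purpose_statement: str   - one-sentence reason for keeping a pre-2025 paper
    """
    # Check 1: course model. Option-based planning marks a legacy resource.
    if resource.get("uses_option_model"):
        return "legacy"
    # Check 2: data booklet. Anything not explicitly post-redesign is suspect.
    if resource.get("data_booklet") != "2025":
        return "risky"
    # Check 3: pre-2025 papers stay only for physics thinking and method,
    # never for structure, timing strategy, or boundary targets.
    purpose = resource.get("purpose_statement", "").lower()
    if any(word in purpose for word in ("structure", "timing", "boundary")):
        return "risky"
    return "aligned"
```

A resource that fails any one check drops out of regular rotation; only one that clears all three counts as aligned.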
Running this screen is fast. What you do with the answer—which topics you weight, which you stop spending time on—is where the real preparation decisions begin.
Focus Preparation — Structural Shifts as Priority Signals
Without the option model, the old habit of quietly back-loading certain topics no longer works. The five-theme structure in the current subject brief puts all examined content in a shared core, and every student is expected to handle questions drawn from anywhere in it. The preparation question shifts from which areas can be safely ignored to how consistently you can perform across the full range.
That reframing makes reliability a more productive lever than avoidance. Anything previously excused as ‘just part of my option’ is now mainstream, which means weak spots surface more often and more visibly. For a student targeting around a 6 with limited preparation time, the highest-yield move is usually to build depth in areas where performance is patchy—revisiting questions you partially understand but don’t yet answer cleanly—rather than doubling down on territory you’ve already secured.
Triage starts with the alignment screen: confirm you’re practicing in the right format before you optimize a timetable. Then focus on two categories. First, concepts you repeatedly miss in mixed-theme practice. Second, cross-cutting skills that appear across every theme—setting up equations from a description, maintaining unit consistency, linking qualitative reasoning to a calculation. Knowing what to cover and in what order is necessary, but it says nothing yet about what to do once you’re in a calculator-permitted paper and need to show work that actually earns marks.

The Calculator Change — Technique Adjustments That Actually Matter
When arithmetic is no longer the obstacle in an exam, method communication becomes the differentiator. The Physics SL subject brief for first assessment 2025 confirms calculators are permitted across all papers—which shifts the real challenge from getting the number right to demonstrating the reasoning behind it. That distinction matters more than it first sounds. Mocks and timed practice should run under calculator-permitted conditions using the current data booklet, with attention on how clearly the route to an answer is communicated, not on mental arithmetic drills that no longer reflect what the exam asks.
A practical standard for protecting method marks without writing out every solution in full: for calculation questions, write the governing relation or law you’re using, show one substitution line where symbols become numbers including units, and finish with a sensibly rounded answer, with units. That level of working is usually enough for a marker to follow your reasoning even if the final value is slightly off. A bare final number, or a chain of calculator outputs with no stated formula, gives little to award. Calculator access has effectively removed arithmetic friction, so the differentiator is now almost entirely setup choice and how cleanly that choice is communicated on the page. What those marks convert to in grade terms is harder to pin down when the benchmarks anchoring that conversion were built for a different course structure.
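As an illustration of that three-line standard, here is a minimal net-force calculation (the numbers are invented; the relation is a standard data-booklet one):

```latex
% Minimal working that still earns method marks (invented numbers):
\begin{aligned}
F &= ma && \text{governing relation} \\
F &= (2.0\,\mathrm{kg})(3.0\,\mathrm{m\,s^{-2}}) && \text{one substitution line, with units} \\
F &= 6.0\,\mathrm{N} && \text{rounded answer, with units}
\end{aligned}
```

The point is not the algebra but the audit trail: each line shows the marker exactly where the result came from, so method can be credited even if the final value slips.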
Grade Targets, Boundary Realities, and Practice Volume
Targeting a 5, 6, or 7 under the redesigned course is less about finding a magic cut-score than about how consistently you produce clean work across the current paper structure. An IBO research summary on statistical grade boundary setting explains that boundaries are harder to anchor when a curriculum or assessment model has just changed, because the historical data used to stabilize them no longer applies in the same way. Pre-2025 cut-scores are, at best, loose context for new cohorts rather than reliable targets. Using them to estimate a current grade is a bit like using last year’s timetable to catch today’s train: earnest, but not especially accurate. The May 2025 Diploma Programme (DP) and Career-related Programme (CP) statistical bulletin provides the first full-cycle distribution picture under the new model, but it functions as system-level context, not a personal conversion chart from percentage to grade.
Under that uncertainty, the most useful way to read mock performance is as evidence of stability and error patterns rather than firm grade predictions. After each timed set or paper, note three things: which format you used (new-syllabus aligned or legacy), your score as a percentage of available marks, and the top two ways you tended to lose marks, such as setup and units, concept selection, or method not shown. Review these weekly, not after every sitting. If your percentage is rising while the same loss types keep appearing, you’ve mainly gained speed and comfort, not reliability. Targeted drills on those specific weaknesses will do more than another full paper. If your percentage is flat but your errors are narrowing into fewer, clearer categories, your performance is stabilizing. That’s the point at which increasing full-paper frequency under realistic conditions starts to pay off.
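One lightweight way to keep that weekly review honest is to turn the log into a short summarizer. This is a sketch of the heuristic described above, not an official method—the field names and the flat-score threshold are invented for illustration:

```python
from collections import Counter

def review_mock_log(entries):
    """Summarize a week's mock log.

    Each entry is a dict with illustrative (invented) fields:
      format:  "new" (new-syllabus aligned) or "legacy"
      percent: score as a percentage of available marks
      losses:  top two ways marks were lost, e.g. ["units", "method not shown"]
    """
    # Only aligned-format sittings carry a readable signal.
    aligned = [e for e in entries if e["format"] == "new"]
    if len(aligned) < 2:
        return "Not enough aligned-format sittings to read a trend."

    trend = aligned[-1]["percent"] - aligned[0]["percent"]
    loss_counts = Counter(loss for e in aligned for loss in e["losses"])
    repeated = [loss for loss, n in loss_counts.items() if n > 1]

    # Rising score + recurring loss types: speed gained, reliability not.
    if trend > 0 and repeated:
        return f"Score rising but recurring losses {repeated}: drill those, not more papers."
    # Flat score + errors narrowing into few categories: performance stabilizing.
    if abs(trend) <= 2 and len(loss_counts) <= 2:
        return "Score flat, errors narrowing: increase full-paper frequency."
    return "Mixed signals: keep logging and review again next week."
```

The exact thresholds matter less than the discipline: legacy-format sittings are excluded from the trend, and the recommendation follows from the pattern of errors, not from the raw percentage alone.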
How you sequence practice follows logically from how you’re reading those results. Earlier in the preparation cycle, short topic-aligned questions do more work: they force repeated practice on setup, units, and data-booklet use without the time overhead of a full paper. As errors concentrate and performance stabilizes, full papers under current-format conditions become a more useful test, not for boundary-chasing, but for checking endurance, timing, and consistency across the range of themes. Grade distributions and bulletin data give useful landscape context, but what should drive daily decisions is how your performance trends on genuinely aligned tasks, not an attempt to reverse-engineer a cut-score from a previous session.
Staying Aligned and Using Mocks Effectively Under the New Syllabus
The redesign didn’t just change what’s on the syllabus. It changed how you should read every preparation signal: which topics matter, which conditions to train under, which technique to prioritize, and what a mock score actually tells you.
Students who handle this course well aren’t the ones who bolted new material onto a pre-2025 approach. They’re the ones who stopped treating the old structure as a baseline entirely and started working directly inside the new one, measuring themselves against current-format tasks rather than inherited benchmarks.
When the exam arrives, none of that old scaffolding is in the room. There’s just the paper, and whatever you actually trained for.