Low frame rate in VR causes sickness because it increases the gap between when your head moves and when the display confirms that movement — and the brain doesn't treat that gap as a technical inconvenience. It treats it as evidence that something is wrong with its sensory systems.
The Brain's Prediction Loop
The brain doesn't wait passively for sensory information to arrive. It continuously generates forward models — predictions about what each sensory channel should report based on current movement and context. When you turn your head left, the brain predicts: visual field shifts left, vestibular system detects rotation, neck proprioception confirms the turn. These predictions are generated in a roughly 20–40 millisecond window before the actual sensory data arrives.
In physical reality, this prediction-confirmation loop runs without friction. Light reflects off surfaces and reaches your eyes in effectively zero time. The confirmation arrives within the brain's prediction window, the forward model validates successfully, and the brain logs "movement occurred as expected."
VR introduces a constraint that doesn't exist in the physical world: the display has to track your head position, render the appropriate view, and push the updated image to the screen before your eyes receive visual confirmation. Frame rate determines how frequently that update can happen. At 90fps, a new frame appears every 11 milliseconds. At 45fps, a new frame appears every 22 milliseconds.
When frame time exceeds the brain's prediction window — roughly 20ms for most people — the vestibular system and proprioception have already reported "head turned left," but the visual confirmation hasn't arrived. The brain cannot distinguish this technical delay from a genuine sensory malfunction. It processes both identically: as a prediction error. Accumulated prediction errors trigger the sensory conflict response — the same mechanism that causes nausea in boats and cars.
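The timing arithmetic above can be sketched in a few lines. This is illustrative only: the 20ms prediction window is the rough per-person figure from the text, not a measured constant, and real confirmation timing also depends on tracking and display latency, covered later.

```python
# Illustrative sketch: does frame time alone fit inside the brain's
# rough ~20 ms prediction window? (Assumed constant, per the text.)
PREDICTION_WINDOW_MS = 20.0

def frame_time_ms(fps: float) -> float:
    """Interval between display updates at a given frame rate."""
    return 1000.0 / fps

def confirmation_late(fps: float, window_ms: float = PREDICTION_WINDOW_MS) -> bool:
    """True when visual confirmation arrives after the forward model expires."""
    return frame_time_ms(fps) > window_ms

for fps in (90, 72, 60, 45):
    status = "prediction error" if confirmation_late(fps) else "inside window"
    print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms frame time: {status}")
```

At 90, 72, and 60fps the frame interval fits inside the window; at 45fps (22.2ms) it does not, which matches the failure mode the text describes.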
Why 90fps Became the Baseline
The VR industry converged on 90fps as a comfort target because it places frame time at 11 milliseconds — well inside most people's 20ms prediction window. At 90fps, visual confirmation arrives before the brain's forward model expires.
This isn't about visual smoothness in the aesthetic sense. A 2024 study on frame rate and simulator sickness found that 120fps brought a further improvement: participants reported significantly lower sickness scores at 120fps compared to 90fps, suggesting the 90fps standard still leaves margin for improvement for users whose prediction windows are narrower than average. Higher refresh rates reduce the baseline frequency of prediction errors — more frequent sampling means less time spent in the temporal gap between head movement and visual confirmation.
Why Frame Drops During Head Rotation Are Worse
Not all frame drops are equal. A drop from 90fps to 60fps during a static scene — when you're looking at something without moving — is relatively benign. The prediction-confirmation loop isn't active at high frequency because head velocity is low or zero.
Frame drops during active head rotation are a different problem. When you're turning your head at speed, the brain's forward model generates rapid sequential predictions. If frame time doubles mid-rotation, a prediction that expected visual confirmation in 11ms now waits 22ms. The vestibular system is mid-report when the timing breaks.
This explains why users reliably report that "frame drops during movement are much worse than low fps in general." The rotational acceleration of a head turn is precisely the kind of input the vestibular system evolved to detect and report with high precision. When the visual system fails to confirm the vestibular report on the expected timeline during that high-confidence event, the mismatch is acute.
The same principle extends to any high-velocity content: racing games, aerial movement, fast-paced combat. The baseline conflict level is already elevated; frame drops add temporal mismatch on top.
Motion-to-Photon Latency: The Full Chain
Frame rate is only one component of total delay. Motion-to-photon latency covers the complete chain from physical head movement to photons reaching your eyes: sensor detection, tracking processing, frame rendering, and display response time. All of these add up.
A headset running at 90fps with 15ms of tracking overhead produces roughly 26ms total latency — already exceeding the comfortable 20ms threshold for many users despite the nominally "high" frame rate. Display technology contributes too: early LCD panels required 5–8ms for pixels to change state; OLED panels reduced this to 2–3ms. Combined with tracking and rendering delays, first-generation headsets often exceeded 40ms total even at 90fps. Modern systems have pushed below 20ms, which accounts for much of the comfort improvement across hardware generations — not just better frame rates, but reductions across the whole latency chain.
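Because the chain is additive, the budget is easy to model. A hedged sketch: the stage names and numbers below are illustrative, pulled from the figures in the text (15ms tracking overhead, 5–8ms LCD vs 2–3ms OLED response), not measurements of any specific headset.

```python
# Motion-to-photon latency is the sum of every stage in the chain:
# sensor detection + tracking, frame rendering, and display response.
def motion_to_photon_ms(stages: dict[str, float]) -> float:
    """Total latency from head movement to photons reaching the eye."""
    return sum(stages.values())

# Illustrative first-generation budget (values from the text's examples)
first_gen = {
    "tracking": 15.0,         # sensor detection + tracking processing
    "render": 1000.0 / 90,    # one frame at 90 fps (~11.1 ms)
    "lcd_response": 8.0,      # early LCD pixel switching (upper bound)
}
# Illustrative modern budget (assumed improved tracking pipeline)
modern = {
    "tracking": 5.0,          # assumption: faster sensor fusion
    "render": 1000.0 / 120,   # one frame at 120 fps (~8.3 ms)
    "oled_response": 2.0,     # OLED pixel switching
}

print(f"first-gen: {motion_to_photon_ms(first_gen):.1f} ms")
print(f"modern:    {motion_to_photon_ms(modern):.1f} ms")
```

Even these rough numbers show the pattern the text describes: the first-generation budget blows past 20ms despite running at 90fps, while the modern budget lands under it through gains at every stage, not just frame rate.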
Reprojection: A Band-Aid With Limits
When rendering demands exceed what the hardware can deliver at target frame rate, modern VR systems fall back on reprojection — called Asynchronous SpaceWarp (ASW) or Motion Smoothing depending on the platform. The technique takes the most recently rendered frame, warps it mathematically to compensate for head movement that occurred since rendering, and displays the adjusted frame instead of a dropped one.
Reprojection prevents the display from freezing and reduces the discontinuity of missed frames — for many users in many scenarios, it's preferable to the alternative. But it's not the same as rendering a new frame. The reprojected image is derived from content rendered 11–22ms earlier; the brain's prediction-confirmation loop receives an approximately correct but not precisely correct update. Many users report a distinct "something is wrong" quality when reprojection engages — ghosting on moving objects, visual smearing during rotation. That perception is accurate: reprojected motion is synthetic, and the sensory system detects the difference.
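The core idea of warping a stale frame can be shown with a deliberately minimal sketch. Real ASW and Motion Smoothing are far more sophisticated — depth-aware, motion-vector based, per-eye — so this one-dimensional version only illustrates why the result is approximate rather than a fresh render.

```python
# Minimal 1-D reprojection sketch: shift the last rendered scanline by the
# head yaw that occurred since it was rendered. The exposed edge is filled
# with 0 because no new content exists there — the source of the visible
# smearing/ghosting the text mentions. Function and parameters are
# hypothetical, for illustration only.
def reproject_row(row: list[int], yaw_delta_deg: float, px_per_deg: float) -> list[int]:
    """Warp a scanline to compensate for head yaw since the frame was rendered."""
    shift = round(yaw_delta_deg * px_per_deg)
    if shift == 0:
        return row[:]
    if shift > 0:  # head turned right -> scene content slides left
        return row[shift:] + [0] * shift
    return [0] * -shift + row[:shift]

frame = [10, 20, 30, 40, 50, 60]
print(reproject_row(frame, yaw_delta_deg=2.0, px_per_deg=1.0))
# the shifted content is plausible, but the trailing edge is a guess
```

The warp keeps the image aligned with the new head pose, but every pixel still shows the scene as it was 11–22ms ago, and the edge fill is fabricated — which is why reprojected frames read as "approximately correct but not precisely correct."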
The practical implication for motion-sensitive players: consistent native frame rates generally cause less sickness than higher average rates propped up by heavy reprojection. Consistency beats headline performance.
Why Inconsistent Frame Rate Is Worse Than Consistently Low
A steady 72fps allows the brain's prediction system to adapt to a consistent 13.9ms confirmation delay. The forward model gradually accounts for the recurring lag — not perfect adaptation, but manageable for low-velocity content.
Sudden drops from 90fps to 45fps break this. The brain's model expected 11ms delays and received 22ms. The prediction system has been calibrated to a pattern that is now violated. The mismatch is acute rather than chronic.
Variable frame rates between 60–90fps create the most difficult conditions, because there's no stable pattern to adapt to. This is the heart of why frame rate matters for comfort: the brain tolerates consistent timing much better than unpredictable timing, even when consistent timing is slower.
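One way to make this concrete is to model the forward model's expected delay as a slowly adapting running average and score "surprise" as deviation from it. This is a toy model — the adaptation rate (0.1) and the frame-time sequences are assumptions, not measured values — but it captures why steady-but-slow beats fast-but-variable.

```python
# Toy model: accumulated prediction-error "surprise" for a frame-time
# sequence, with the expected delay adapting slowly toward recent timing.
# The 0.9/0.1 adaptation split is an assumed illustration parameter.
def total_surprise(frame_times_ms: list[float]) -> float:
    expected = frame_times_ms[0]
    surprise = 0.0
    for ft in frame_times_ms:
        surprise += abs(ft - expected)          # deviation from prediction
        expected = 0.9 * expected + 0.1 * ft    # slow recalibration
    return surprise

steady_72 = [13.9] * 20                          # consistent 72 fps
variable = [11.1, 16.7, 13.9, 22.2, 11.1] * 4    # bouncing between 45-90 fps
print(f"steady 72 fps surprise: {total_surprise(steady_72):.1f}")
print(f"variable surprise:      {total_surprise(variable):.1f}")
```

The steady sequence accumulates zero surprise despite its longer average delay; the variable sequence accumulates surprise on nearly every frame, because the expectation never stabilizes.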
Temporal Reliability Beats Visual Fidelity
Frame rate in VR isn't primarily about visual smoothness. It's about how frequently the brain receives confirmation that its movement predictions were accurate. When frame time exceeds the 20ms prediction window, the brain's motion-detection system detects the gap, interprets it as sensory conflict, and responds accordingly.
This is why reducing graphics quality to maintain consistent 90fps reliably reduces sickness: the brain tolerates lower texture resolution far more easily than temporal inconsistency. A simplified environment at steady 90fps provides more reliable sensory confirmation than a detailed environment with variable frame times. Frame drops during head rotation are disproportionately bad because they break the prediction loop at exactly the moment the vestibular system is generating its most confident, high-precision reports.
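The "trade fidelity for timing" strategy is what dynamic-resolution controllers in VR engines implement. A minimal sketch of the control loop, with assumed step sizes and thresholds (no particular engine's defaults):

```python
# Sketch of a dynamic-resolution controller: shed render scale when frame
# time threatens the budget, recover it cautiously when there's headroom.
# Step sizes (0.1 down, 0.05 up) and the 80% headroom threshold are
# illustrative assumptions.
FRAME_BUDGET_MS = 1000.0 / 90  # ~11.1 ms target for steady 90 fps

def next_render_scale(scale: float, last_frame_ms: float) -> float:
    """Lower resolution when over budget; restore it slowly when well under."""
    if last_frame_ms > FRAME_BUDGET_MS:
        return max(0.5, scale - 0.1)       # sacrifice detail to protect timing
    if last_frame_ms < 0.8 * FRAME_BUDGET_MS:
        return min(1.0, scale + 0.05)      # cautiously restore detail
    return scale

scale = 1.0
for frame_ms in [10.0, 12.5, 13.0, 10.5, 8.0]:
    scale = next_render_scale(scale, frame_ms)
print(f"final render scale: {scale:.2f}")
```

The asymmetry is deliberate: dropping resolution quickly prevents the acute mid-rotation frame drop, while restoring it slowly avoids oscillation — exactly the priority ordering the text argues for, timing reliability over visual fidelity.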