Benchmarking has long had a problem: ensuring its numbers relate back to what players actually feel when gaming. The issue mostly comes down to explaining precisely why stutters and hitching are happening in games, not just that they exist. For the most part, “frametime” testing (or, more accurately, frame-to-frame interval testing) has helped to tackle this. Tom Petersen, Scott Wasson, and Ryan Shrout furthered that discussion over a decade ago, and from there, the industry moved toward the now-familiar bar charts with 1% and 0.1% lows (with occasional frame interval plots to explore major problems). But under the surface, these game stutters have sometimes been misattributed to frametime pacing issues rather than their actual cause: animation error, aka simulation time error. Think of this video as a research / whitepaper-style piece that presents some experiments and possible representations for animation error in benchmarks. It explains 1% lows, 0.1% lows, average FPS, “frametimes,” and animation error in depth. We will begin adopting these new charts in our benchmarks.
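To make the distinction concrete, here is a minimal sketch in Python with made-up numbers. The `sim_step` and `display` arrays, and the particular 1%-low formula (averaging the slowest 1% of frames, one common variant; exact formulas vary between outlets), are illustrative assumptions, not the video's actual capture pipeline or math:

```python
import numpy as np

# Hypothetical per-frame data (milliseconds):
# sim_step[i] = how far the game advanced its simulation for frame i
# display[i]  = how long frame i was actually shown on screen
sim_step = np.array([16.7, 16.7, 16.7, 16.7, 33.3, 16.7, 16.7, 16.7])
display  = np.array([16.7, 16.7, 16.7, 33.3, 16.7, 16.7, 16.7, 16.7])

# Classic frame-to-frame interval metrics:
fps = 1000.0 / display                      # instantaneous FPS per frame
avg_fps = 1000.0 / display.mean()           # average FPS

# "1% low" / "0.1% low" as the mean of the slowest 1% / 0.1% of frames
# (one common definition; with few samples this degenerates to the worst frame).
sorted_fps = np.sort(fps)
pct_1_low  = sorted_fps[: max(1, len(fps) // 100)].mean()
pct_01_low = sorted_fps[: max(1, len(fps) // 1000)].mean()

# Animation error: the mismatch between how far the simulation stepped for a
# frame and how long that frame actually sat on screen. A frame can arrive on
# a perfectly even cadence and still look like a stutter if the simulation
# stepped by the wrong amount, and vice versa.
animation_error = sim_step - display        # per-frame error, ms

print(f"avg FPS {avg_fps:.1f}, 1% low {pct_1_low:.1f}, 0.1% low {pct_01_low:.1f}")
print("per-frame animation error (ms):", np.round(animation_error, 1))
```

In this toy data, both arrays contain one long 33.3 ms entry, but offset by one frame: the engine displays a long frame, then steps the simulation by that long duration on the following frame. Interval metrics register a single slow frame, while the animation error column flags two consecutive frames where on-screen motion diverges from presented time, which is the kind of hitch pure frametime pacing charts can misattribute.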
It took a bit to get through the whole thing, but this looks like the most exciting development in the niche field of hardware gaming reviews since 1% lows became a thing (RIP Tech Report).