I feel like there are some ideas in Fintech that haven't quite leaked over into the gaming space yet.
Has anyone investigated the idea of sacrificing an entire high-priority thread to a timer loop that busy-waits over a collection of pending timers? Most gaming PCs these days have more than enough cores to get away with a full-time high-precision timer thread.
In my experiments with this idea, I have had no trouble getting jitter under 100 µs in managed languages (C#/Java). Scheduling millions of timers seems to be no problem, assuming mild optimizations are employed (i.e. keeping them ordered by next desired execution tick). With all of this magic turned on, I still have 31 other cores to play with. Task.Delay looks like a preschool art project by comparison to the precision I get out of my (arguably dumb-as-hell) DIY timer impl.
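A minimal Java sketch of the general shape (heavily simplified: a synchronized priority queue where a real version would want something lock-free, and all names invented for illustration):

    import java.util.PriorityQueue;

    // One dedicated high-priority thread busy-waits over pending timers,
    // kept ordered by their next due time in nanoseconds.
    final class SpinTimerThread implements Runnable {
        record PendingTimer(long dueNanos, Runnable action) {}

        private final PriorityQueue<PendingTimer> timers =
                new PriorityQueue<>((a, b) -> Long.compare(a.dueNanos(), b.dueNanos()));

        synchronized void schedule(long delayNanos, Runnable action) {
            timers.add(new PendingTimer(System.nanoTime() + delayNanos, action));
        }

        @Override
        public void run() {
            while (true) {
                long now = System.nanoTime();
                PendingTimer due = null;
                synchronized (this) {
                    PendingTimer head = timers.peek();
                    if (head != null && head.dueNanos() <= now) {
                        due = timers.poll();
                    }
                }
                if (due != null) {
                    due.action().run();   // fire on the timer thread itself
                }
                Thread.onSpinWait();      // hint that this is a deliberate spin loop
            }
        }

        public static void main(String[] args) throws InterruptedException {
            SpinTimerThread timers = new SpinTimerThread();
            Thread t = new Thread(timers, "spin-timer");
            t.setPriority(Thread.MAX_PRIORITY);   // "sacrifice" a high-priority thread to the loop
            t.setDaemon(true);
            t.start();
            timers.schedule(1_000_000, () -> System.out.println("fired ~1 ms later"));
            Thread.sleep(10);                     // keep the demo alive long enough to see it fire
        }
    }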
I have used this approach to schedule client frame draws in a custom UI framework, and it is indistinguishable from butter on my computer and across reasonable networks. In my prototypes, the client input events are entirely decoupled from the frame draws by way of a separate ringbuffer/batching abstraction. There is actually no locking in my architecture. Draws back to the client use immutable snapshots of state, so concurrent updates can continue unfettered. The state is only ever mutated from a single thread.
Technically, I sacrifice ~2 threads in my architecture, as the ringbuffer also uses a busy-wait technique.
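A rough sketch of the snapshot-publishing side, assuming one simulation thread and one render thread (the input ringbuffer is omitted, and all names are invented for illustration): the simulation thread is the only writer, and it publishes immutable copies through an atomic reference that the draw side reads without any lock.

    import java.util.concurrent.atomic.AtomicReference;

    // Single-writer game state with lock-free immutable snapshots for rendering.
    final class SnapshotExample {
        // Immutable snapshot of whatever the renderer needs.
        record WorldSnapshot(long tick, double playerX, double playerY) {}

        private static final AtomicReference<WorldSnapshot> latest =
                new AtomicReference<>(new WorldSnapshot(0, 0.0, 0.0));

        public static void main(String[] args) throws InterruptedException {
            // Simulation thread: the only thread that ever mutates state.
            Thread sim = new Thread(() -> {
                long tick = 0;
                double x = 0, y = 0;
                while (!Thread.currentThread().isInterrupted()) {
                    tick++;
                    x += 0.01;   // mutate single-threaded state
                    y += 0.02;
                    latest.set(new WorldSnapshot(tick, x, y));  // publish an immutable copy
                }
            }, "sim");

            // Render thread: reads whatever snapshot is current, never blocks the writer.
            Thread render = new Thread(() -> {
                for (int frame = 0; frame < 5; frame++) {
                    WorldSnapshot s = latest.get();
                    System.out.printf("frame %d drew tick %d at (%.2f, %.2f)%n",
                            frame, s.tick(), s.playerX(), s.playerY());
                    try { Thread.sleep(16); } catch (InterruptedException e) { return; }
                }
            }, "render");

            sim.setDaemon(true);
            sim.start();
            render.start();
            render.join();
        }
    }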
All of this said, if you are willing to burn more power, use more cores, and think outside of the box just a bit, you can do some pretty amazing shit.
Consider these ideas at a broader scale too: what if you could amortize the cost of that power-hungry timer thread across thousands of gamers instead of just one? Are there maybe some additional benefits to moving 100% of the game state to the server and shipping x265-encoded frames to the end users?
Written during one of those blizzards like I have only ever seen in Toronto.
This makes me want a French Vanilla from Tim Hortons.
> The joy of "discovering" this approach was short lived: This system is very well known in the game industry and is actually the keystone of games such as Starcraft II or id Software engines !
Is it very well known in the game industry? I have trouble finding anything about it.
Related: If anyone is interested, I've been working on an ECS-based mini-game framework in Rust [0] that runs on this idea. It has a 1200 Hz simulation clock, a 60 Hz rendering clock, and a 150 Hz physics integrator (Euler); a rough sketch of that multi-rate clock idea follows the links below. I've also posted some example mini-games that use it [1].
[0] https://github.com/Syn-Nine/mgfw
[1] https://github.com/Syn-Nine/rust-mini-games/tree/main/2d-gam...
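A generic sketch of what driving several fixed-rate clocks from one loop can look like (illustrative Java, not the actual mgfw code; the rates match the ones above, everything else is invented):

    // Several fixed-rate "clocks" (simulation, physics, rendering) driven from
    // one loop, each with its own accumulator.
    final class MultiRateLoop {
        static final double SIM_DT = 1.0 / 1200.0;
        static final double PHYSICS_DT = 1.0 / 150.0;
        static final double RENDER_DT = 1.0 / 60.0;

        static double simAcc = 0, physAcc = 0, renderAcc = 0;

        public static void main(String[] args) {
            long last = System.nanoTime();
            for (int i = 0; i < 100_000; i++) {   // stand-in for "while the game is running"
                long now = System.nanoTime();
                double dt = (now - last) / 1e9;
                last = now;

                simAcc += dt;
                while (simAcc >= SIM_DT) { simTick(); simAcc -= SIM_DT; }

                physAcc += dt;
                while (physAcc >= PHYSICS_DT) { physicsStep(PHYSICS_DT); physAcc -= PHYSICS_DT; }

                renderAcc += dt;
                if (renderAcc >= RENDER_DT) { renderFrame(); renderAcc -= RENDER_DT; }
            }
        }

        static void simTick() { /* game logic at 1200 Hz */ }
        static void physicsStep(double dt) { /* Euler integration at 150 Hz */ }
        static void renderFrame() { /* draw at 60 Hz */ }
    }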
Regarding "missing" a collision due to the delta-time being too large, I thought the game industry standard is to "backtrack" (in math, not in ticks) to determine the correct collision point and go from there. Granted, that is a bit more complicated than just keeping the delta-t/tick consistently small, but it is more correct.
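For a concrete (if very simplified) illustration of that backtracking idea, here is a Java sketch of a moving point against a single axis-aligned wall: instead of shrinking the tick, solve for the fraction of the tick at which the crossing happens and resolve there. The wall setup and names are made up for the example.

    // Swept test of a moving point against a wall at x = wallX.
    final class SweptWallExample {
        // Returns the fraction t in [0, 1] of the tick at which the point crosses
        // the wall, or -1 if it does not cross during this tick.
        static double timeOfImpact(double x0, double x1, double wallX) {
            if ((x0 - wallX) * (x1 - wallX) > 0) return -1;  // both on the same side: no crossing
            if (x1 == x0) return -1;                          // not moving in x
            return (wallX - x0) / (x1 - x0);                  // linear interpolation factor
        }

        public static void main(String[] args) {
            double x0 = 0.0, x1 = 10.0, wallX = 4.0;  // a fast point that would tunnel past the wall
            double t = timeOfImpact(x0, x1, wallX);
            double xHit = x0 + t * (x1 - x0);          // backtrack to the exact contact point
            System.out.printf("hit at t=%.2f of the tick, x=%.2f%n", t, xHit);
        }
    }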
On my simple games I essentially do this, which seems to work, but I have concerns about it cascading out of control if a frame takes too long. Any tips for dealing with that? So far I just aggressively profile to make sure my game logic has a good time buffer before it hits my timestep.
const TIMESTEP = 1 / 60;
let since_last_frame = 0.0;

function update(dt) {
  since_last_frame += dt;
  while (since_last_frame > TIMESTEP) {
    tick();
    since_last_frame -= TIMESTEP;
  }
}

function tick() {
  // handle game logic at a fixed timestep
}
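For what it's worth, a common way to keep that catch-up loop from cascading (the "spiral of death") is to cap how many fixed ticks a single frame may run, so a long frame degrades into temporary slow motion instead of an ever-growing backlog. A rough Java sketch of the same structure, with the cap value chosen arbitrarily:

    // Fixed-timestep accumulator with a cap on catch-up ticks per frame.
    final class FixedStepLoop {
        static final double TIMESTEP = 1.0 / 60.0;  // fixed simulation step in seconds
        static final int MAX_TICKS_PER_FRAME = 5;   // arbitrary cap for illustration

        double sinceLastFrame = 0.0;

        void update(double dt) {
            sinceLastFrame += dt;
            int ticks = 0;
            while (sinceLastFrame >= TIMESTEP && ticks < MAX_TICKS_PER_FRAME) {
                tick();
                sinceLastFrame -= TIMESTEP;
                ticks++;
            }
            // If we hit the cap, drop the remaining backlog so it cannot grow without bound.
            if (ticks == MAX_TICKS_PER_FRAME) {
                sinceLastFrame = 0.0;
            }
        }

        void tick() {
            // game logic at a fixed timestep
        }

        public static void main(String[] args) {
            FixedStepLoop loop = new FixedStepLoop();
            loop.update(1.5);  // a pathological 1.5 s frame only runs MAX_TICKS_PER_FRAME ticks
            System.out.println("backlog after clamp: " + loop.sinceLastFrame);
        }
    }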
I’m not a gamedev at all, so I never really dug into it, but the issue I’ve always had there is how to handle input when executing multiple simulation steps within a single “execution frame”.
So is the length of this slice equivalent to the infamous "tick rate" that gamers often complain about?
The other problem with naively measuring the frame duration is that you'll get sub-millisecond jitter, because modern operating systems are neither "hard realtime" nor "soft realtime". That in turn introduces micro-stutter, because your game frames will be timed slightly differently from when your new frame actually shows up on screen (due to the fixed display refresh rate, unless of course a variable refresh rate like G-Sync is used).
This is also my main pet peeve on the web. You can't measure a precise frame duration (made much worse by Spectre/Meltdown mitigations), but you also can't query the display refresh rate.
In the '80s we took perfectly smooth scrolling and animations for granted, because most '80s home computers and game consoles were proper hard-realtime systems. Counter-intuitively, this is harder to achieve on modern PCs that are many thousands of times faster.
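One common mitigation for the micro-stutter described above (not something this comment proposes, just a widely used trick) is to snap the noisy measured frame duration to the nearest multiple of the display refresh interval when it is close enough, so the simulation advances in display-sized steps. A rough Java sketch, with a 60 Hz refresh rate assumed rather than queried:

    // Snap a noisy measured frame duration to the nearest multiple of the
    // display refresh interval.
    final class FrameTimeFilter {
        static final double REFRESH_INTERVAL = 1.0 / 60.0;  // assumed; often cannot be queried
        static final double TOLERANCE = 0.002;               // accept up to 2 ms of jitter (arbitrary)

        static double filter(double measuredDt) {
            double multiples = Math.round(measuredDt / REFRESH_INTERVAL);
            double snapped = multiples * REFRESH_INTERVAL;
            // Only snap when the measurement is plausibly "one or more vsync intervals".
            if (multiples >= 1 && Math.abs(measuredDt - snapped) < TOLERANCE) {
                return snapped;
            }
            return measuredDt;  // fall back to the raw value if it's way off
        }

        public static void main(String[] args) {
            System.out.println(filter(0.0162));  // ~16.2 ms -> snapped to 16.67 ms
            System.out.println(filter(0.0335));  // ~33.5 ms -> snapped to 33.33 ms
            System.out.println(filter(0.0070));  // far from any multiple -> left as-is
        }
    }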