The One Man MMO Project
The story of a lone developer's quest to build an online world :: MMO programming, design, and industry commentary
Mitigating Floating Point Inaccuracy Rendering Large Environments
By Robert Basler on 2013-01-20 02:09:15
Homepage: www.onemanmmo.com email:one at onemanmmo dot com

My game's environment is 300km by 300km. In real world terms, that isn't all that big, but for a game world, it is sizeable. Once a rendered scene becomes larger than a couple of kilometres in size, you start running into severe problems with floating point accuracy. Even with my test terrain, which is only 20km by 20km, I had very noticeable problems with props on the terrain appearing and disappearing as the camera moved. The graphics card thought the terrain was alternately in front of and behind the prop.

Luckily there is a workaround for this issue that is fairly simple to implement.

I give each entity in the world a set of integer, rather than floating point, X/Y/Z coordinates. With 32-bit coordinate values I get centimetre accuracy (which is fine given that a metre is about a quarter inch across on the screen) within a cube that is 40,000km on each side. And there's no loss of accuracy throughout the full range of the coordinates.
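As a minimal sketch of that representation (the names and helpers here are mine, not from the actual engine), assuming unsigned 32-bit coordinates measured in centimetres:

```cpp
#include <cstdint>

// World-space position stored as unsigned 32-bit centimetres.
// 2^32 cm is a little under 43,000 km per axis, so a cube roughly
// 40,000 km on a side fits with uniform centimetre precision everywhere.
struct WorldPos
{
    uint32_t x, y, z;   // centimetres from one corner of the world
};

// Metre <-> centimetre helpers for readability.
inline uint32_t MetresToCm(double metres)
{
    return static_cast<uint32_t>(metres * 100.0);
}

inline double CmToMetres(uint32_t cm)
{
    return cm / 100.0;
}
```

Unlike a float, whose precision degrades the farther you get from zero, the integer representation has the same centimetre resolution at every point in the world.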

OpenGL still expects floating point coordinates, and since the integer coordinates cover a huge range, they can't simply be cast to floats. Instead, there's a simple trick. As the camera moves away from the origin, I move the origin closer to the camera by increasing an integer offset which is subtracted from the integer coordinates of each renderable item, as well as from the integer coordinates of the camera. Because the same amount is subtracted from the camera and from everything that needs to be rendered, their relative positions stay constant, while their absolute positions remain close to the origin, where floating point accuracy can be maintained.
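The conversion step might look something like this (a sketch with my own names, assuming integer centimetres in and float metres out, and assuming every item sits on the positive side of the offset, which the clamping described later guarantees):

```cpp
#include <cstdint>

struct IntVec3 { uint32_t x, y, z; };   // world position, centimetres
struct Vec3f   { float x, y, z; };      // render position, metres

// Subtract the shared integer offset from an integer world position,
// then convert the small remainder to float for the renderer. The
// subtraction is exact because it happens in integers; only the result,
// which stays within a couple of kilometres of zero, becomes a float.
Vec3f ToRenderSpace(const IntVec3& posCm, const IntVec3& offsetCm)
{
    // Assumes posCm >= offsetCm on every axis.
    return Vec3f {
        static_cast<float>(posCm.x - offsetCm.x) / 100.0f,
        static_cast<float>(posCm.y - offsetCm.y) / 100.0f,
        static_cast<float>(posCm.z - offsetCm.z) / 100.0f };
}
```

The same function is applied to the camera's coordinates, so everything the renderer sees lives in a small, float-friendly space near the origin.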

Something you'll notice if you adjust the offset coordinates without moving the camera is that the scene will jitter slightly; this is again due to floating point inaccuracy. To mitigate that, I only change the offsets once each kilometre, so we are always between 1km and 2km from the origin at any time. Also, since the camera is always moving when the change happens, there is no chance that anyone will notice this minor imperfection.
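One hypothetical way to step the offset in whole kilometres so the camera always trails 1km to 2km behind the origin (again, names and exact policy are my own sketch, not the original code):

```cpp
#include <cstdint>

const uint32_t kKmInCm = 100000u;   // one kilometre in centimetre units

// Returns the origin offset for one axis given the camera's coordinate.
// Snap the camera position down to the nearest whole kilometre, then
// back off one more kilometre, so the camera sits between 1km and 2km
// from the origin and the offset only changes once per kilometre moved.
uint32_t OriginForAxis(uint32_t cameraCm)
{
    uint32_t snapped = (cameraCm / kKmInCm) * kKmInCm;
    return snapped >= kKmInCm ? snapped - kKmInCm : 0u;
}
```

Because the offset only moves when the camera crosses a kilometre boundary, and the camera is by definition in motion at that moment, the sub-pixel jitter of the switch is hidden.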

Another little gotcha is that since the coordinates are unsigned, there is the possibility that renderable items will end up on the wrong side of the origin, where the subtraction would wrap around. I clamp the coordinates during the subtraction so renderables on the wrong side of the origin are pushed into range. I don't simply let the coordinates wrap because there's no predicting what results you would get passing huge, highly inaccurate floats to the renderer. The only danger with this solution is that you might see these pushed renderables in the wrong position. Since my largest renderable is 160m across, I keep the camera at least 1000m from the origin so there is no danger of seeing anything I shouldn't. Even better would be to just cull such renderables.
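The clamped subtraction for one axis might be sketched like this (my naming, assuming the same centimetre units as above):

```cpp
#include <cstdint>

// With unsigned coordinates, an item sitting behind the offset would
// wrap around to a value tens of thousands of kilometres away. Clamp
// the difference to zero instead, pushing the item onto the origin
// plane. Anything clamped is mispositioned, but keeping the camera at
// least 1km from the origin (with renderables under 160m across) keeps
// those items out of view; culling them outright would also work.
uint32_t ClampedDeltaCm(uint32_t posCm, uint32_t offsetCm)
{
    return posCm >= offsetCm ? posCm - offsetCm : 0u;
}
```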