This is a journey in understanding how to create a 3D graphics engine. I realized I was interested in creating a terrain rendering engine, so I was reading through the 3D Terrain Rendering book that I had. Little did I realize at the time that there were significant (in my mind) helper classes that I wanted to understand before moving on. I realized I wanted to understand the camera in OpenGL. After reviewing the camera code, I then realized that I needed to brush up on some vector math. And so the path has led me to where I am: starting at the beginning with 3D Math Primer for Graphics and Game Development, copying the appendix for the Math Review section. I am truly starting at the beginning so that I can fully understand what awaits me.
Goal:
Finish Chapter 1 of Essential Mathematics for Games (Floating Point Numbers)
At first, going over the first chapter on floating point numbers was a laborious task. I convinced myself that I needed to get through the chapter, and by the end I had a newfound appreciation for the IEEE floating point standard. The basic representation of an IEEE 32-bit single precision floating point number is 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa (the fractional part of the significand).
One of the questions that I had was where the sign bit for the exponent was. That is when I learned that, as part of the standard, a bias of 127 is built into the stored exponent value. This gives a raw range of -127 to 128 (with the two extremes reserved for zeros/denormals and infinities/NaNs). I thought that was clever: it avoids using up an entire bit just for a sign.
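To make the layout and the bias concrete, here is a minimal sketch of my own (not from the book) that pulls the three fields out of a float and removes the bias. For example, -6.25f is -1.5625 x 2^2, so the stored exponent field is 2 + 127 = 129.

#include <stdio.h>
#include <string.h>

int main()
{
    float f = -6.25f;                    // -1.5625 x 2^2

    unsigned int bits;
    memcpy(&bits, &f, sizeof(bits));     // reinterpret the 32 bits of the float

    unsigned int sign     = (bits >> 31) & 0x1;    //  1 bit
    unsigned int exponent = (bits >> 23) & 0xFF;   //  8 bits, stored with a bias of 127
    unsigned int mantissa = bits & 0x7FFFFF;       // 23 bits, the fractional part

    printf("sign = %u\n", sign);                                            // 1
    printf("exponent field = %u (unbiased: %d)\n", exponent, (int)exponent - 127); // 129 (unbiased: 2)
    printf("mantissa = 0x%06X\n", mantissa);                                // 0x480000, i.e. .5625 in binary
    return 0;
}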
The other very interesting thing that I learned is that there is a range where accuracy falls off, called the hole at zero: if you subtract two very small values that are very close together, the result can be smaller than the smallest normalized number. Before the floating point standard, systems would basically zero out any result in that range, a scheme called flushing to zero. The problem with this is that you obviously lose accuracy. The solution in the IEEE floating point standard was gradual underflow, where denormals (denormalized numbers) are generated to fill that gap. Apparently, this part of the standard was controversial because of how expensive the generation of denormals is to implement.
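Here is a minimal sketch of my own (not from the book) showing gradual underflow at work: the difference between two tiny, nearby normalized floats lands in the hole at zero, and IEEE denormals keep it from collapsing to exactly zero the way a flush-to-zero system would.

#include <stdio.h>
#include <float.h>   // FLT_MIN, the smallest normalized float (~1.1755e-38f)

int main()
{
    // Two nearly equal values just above the normalized minimum.
    float a = 1.50f * FLT_MIN;
    float b = 1.25f * FLT_MIN;

    float diff = a - b;   // 0.25 * FLT_MIN: smaller than any normalized float

    // With IEEE gradual underflow the difference survives as a denormal;
    // a flush-to-zero system would have produced exactly 0 here.
    printf("FLT_MIN = %g\n", FLT_MIN);
    printf("a - b   = %g\n", diff);
    printf("collapsed to zero? %s\n", diff == 0.0f ? "yes" : "no");
    return 0;
}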
The author mentioned that they had run into this exact problem themselves. I implemented the timing code they used to measure the cost of generating denormals. I am using Visual Studio 2008, running on an Intel Core i7 920 @ 2.67GHz on Windows Vista 64.
Code:
#include <stdio.h>
#include "windows.h"
#pragma comment(lib, "winmm.lib")   // timeGetTime() lives in winmm

float doSomething(float fVal) { return fVal; }

int main(int argc, char *argv[])
{
    int i = 0;
    float fVal = 0.0f;
    int totalCycles = 100000;

    // 1.0e-37f is above FLT_MIN (~1.18e-38), so this loop stays in the normal range.
    unsigned long startTime = timeGetTime();
    for( i = 0; i < totalCycles; i++ ) { fVal = doSomething(1.0e-37f); }
    unsigned long endTime = timeGetTime();
    printf("Start: %lu, End: %lu, The time difference for normal range %lu\n", startTime, endTime, endTime - startTime);

    // 1.0e-38f is below FLT_MIN, so the hardware has to generate denormals here.
    unsigned long startTimeDenormal = timeGetTime();
    for( i = 0; i < totalCycles; i++ ) { fVal = doSomething(1.0e-38f); }
    unsigned long endTimeDenormal = timeGetTime();
    printf("Start: %lu, End: %lu, The time difference for denormal range %lu\n", startTimeDenormal, endTimeDenormal, endTimeDenormal - startTimeDenormal);

    return 0;
}
Output:
C:\Game_Development\Projects\FloatingPointTest\Debug>FloatingPointTest.exe
Start: 57699468, End: 57699470, The time difference for normal range 2
Start: 57699471, End: 57699499, The time difference for denormal range 28
I was amazed at the difference in performance: roughly an order of magnitude more time for the same loop once the values fall into the denormal range (28 ms versus 2 ms for 100,000 calls).
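As a follow-up sketch of my own (not from the book): on SSE hardware you can opt back into the old flush-to-zero behaviour via the _MM_SET_FLUSH_ZERO_MODE intrinsic from xmmintrin.h, trading the accuracy of gradual underflow for speed. This only affects SSE code paths (for example x64 or /arch:SSE2 builds), not the x87 FPU, so whether it changes a timing like the one above depends on how the compiler generates the floating point code.

#include <stdio.h>
#include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE

int main()
{
    // Tell the SSE unit to flush denormal results to zero instead of
    // generating them (the pre-IEEE behaviour described above).
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);

    volatile float a = 1.0e-30f;   // volatile so the compiler does not fold the product
    volatile float b = 1.0e-8f;
    float c = a * b;               // mathematically 1.0e-38f, which is in the denormal range

    // With FTZ on (and SSE code generation), c is flushed to 0; otherwise it is a denormal.
    printf("a * b = %g\n", c);
    return 0;
}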