If there's one person you can trust regarding 3D graphics technology, it's John Carmack. Carmack not only spends every waking hour bashing away at his keyboard to bring you great games such as the Quake series and the upcoming Doom 3, but he also frequently speaks out to the community on various issues. This is one such occasion. As I'm sure you are all aware by now, Futuremark is accusing Nvidia of modifying its drivers in order to gain an unfair advantage in 3DMark, arguably the most widely used 3D benchmarking tool ever created. Here's what Carmack had to say:
"Rewriting shaders behind an application's back in a way that changes the output under non-controlled circumstances is absolutely, positively wrong and indefensible.
Rewriting a shader so that it does exactly the same thing, but in a more efficient way, is generally acceptable compiler optimization, but there is a range of defensibility from completely generic instruction scheduling that helps almost everyone, to exact shader comparisons that only help one specific application. Full shader comparisons are morally grungy, but not deeply evil.
The significant issue that clouds current ATI / Nvidia comparisons is fragment shader precision. Nvidia can work at 12 bit integer, 16 bit float, and 32 bit float. ATI works only at 24 bit float. There isn't actually a mode where they can be exactly compared. DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit. For just about any given set of operations, the Nvidia card operating at 16 bit float will be faster than the ATI, while the Nvidia operating at 32 bit float will be slower. When DOOM runs the NV30 specific fragment shader, it is faster than the ATI, while if they both run the ARB2 shader, the ATI is faster.
When the output goes to a normal 32 bit framebuffer, as all current tests do, it is possible for Nvidia to analyze data flow from textures, constants, and attributes, and change many 32 bit operations to 16 or even 12 bit operations with absolutely no loss of quality or functionality. This is completely acceptable, and will benefit all applications, but will almost certainly induce hard to find bugs in the shader compiler. You can really go overboard with this - if you wanted every last possible precision savings, you would need to examine texture dimensions and track vertex buffer data ranges for each shader binding. That would be a really poor architectural decision, but benchmark pressure pushes vendors to such lengths if they avoid outright cheating. If really aggressive compiler optimizations are implemented, I hope they include a hint or pragma for "debug mode" that skips all the optimizations."
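To see why reduced precision can be invisible in practice, here's a rough sketch in C (my own illustration, not Carmack's or Nvidia's code) of the framebuffer point: once a shader result is quantized to 8 bits per channel, a computation carried out at roughly 16-bit float precision can land on exactly the same pixel value as the full 32-bit version. The fp16 emulation and the sample values below are assumptions chosen purely for demonstration.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Crude fp16 emulation for illustration: keep only the top 10 mantissa bits
   of a 32-bit float. This ignores fp16's smaller exponent range, but it is
   close enough to mimic the rounding error for values in [0, 1]. */
static float reduce_precision(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    bits &= 0xFFFFE000u;            /* drop the low 13 mantissa bits */
    memcpy(&x, &bits, sizeof x);
    return x;
}

/* Quantize a [0, 1] colour channel to the 8 bits it occupies in a
   conventional 32-bit RGBA framebuffer. */
static uint8_t to_framebuffer(float channel)
{
    if (channel < 0.0f) channel = 0.0f;
    if (channel > 1.0f) channel = 1.0f;
    return (uint8_t)(channel * 255.0f + 0.5f);
}

int main(void)
{
    /* A toy "shader" operation: modulate a texture sample by a lighting term.
       The specific values are made up for the example. */
    float texel = 0.73f, light = 0.58f;

    float full    = texel * light;                       /* 32-bit float path  */
    float reduced = reduce_precision(texel) *
                    reduce_precision(light);             /* ~16-bit float path */

    printf("fp32 result: %.7f -> framebuffer value %u\n", full,    to_framebuffer(full));
    printf("fp16 result: %.7f -> framebuffer value %u\n", reduced, to_framebuffer(reduced));
    /* Both paths round to the same 8-bit value here: the extra precision of
       the fp32 computation never reaches the screen. */
    return 0;
}
```

Of course, a real driver has to prove this kind of equivalence by analyzing the shader's data flow rather than checking individual values, which is exactly where the hard-to-find compiler bugs Carmack mentions would come from.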
Well, this is clearly a moral dilemma for both ATI and Nvidia. In my opinion, as long as Nvidia includes a driver option to disable these optimizations, and thus ensures that reviewers are able to make fair comparisons between products, its actions are perfectly justifiable.