Just weeks after Nvidia was rumored to have cheated with their drivers, the ball is now firmly in ATi's court, as they've been accused of doing exactly the same. Obviously, neither manufacturer will tell you straight up that they reduced image quality in order to boost performance. Both will sugar-coat their responses and come up with all sorts of excuses to soften the blow; at the end of the day, any reduction in image quality to boost performance is regarded as a cheat by most. As we mentioned before, we're not always against such optimizations, or shortcuts as we'd rather call them, because sometimes it is utter nonsense to render every pixel fully. In this case the discussion is about trilinear filtering. Let me quote ATi's official response first, before taking a closer look at what ATi has been doing with their filtering:
There has been a lot of discussion about our trilinear filtering algorithms recently.
The objective of trilinear filtering is to make transitions between mipmap levels as near to invisible as possible. As long as this is achieved, there is no "right" or "wrong" way to implement the filtering.
We have added intelligence to our filtering algorithm to increase performance without affecting image quality. As some people have discovered, it is possible to show differences between our filtering implementations for the RADEON 9800XT and RADEON X800. However, these differences can only be seen by subtracting before and after screenshots and amplifying the result. No-one has claimed that the differences make one implementation "better" than another.
Our algorithm for image analysis-based texture filtering techniques is patent-pending. It works by determining how different one mipmap level is from the next and then applying the appropriate level of filtering. It only applies this optimization to the typical case – specifically, where the mipmaps are generated using box filtering. Atypical situations, where each mipmap could differ significantly from the previous level, receive no optimizations. This includes extreme cases such as colored mipmap levels, which is why tests based on color mipmap levels show different results. Just to be explicit: there is no application detection going on; this just illustrates the sophistication of the algorithm.
We encourage users to experiment with moving the texture preference slider from "Quality" towards "Performance" – you will see huge performance gains with no effect on image quality until the very end, and even then, the effect is hardly noticeable. We are confident that we give gamers the best image quality at every performance level.
Microsoft does set some standards for texture filtering, and the company's WHQL process includes extensive image quality tests for trilinear filtering and mipmapping. CATALYST passes all these tests – and without application detection, which could be used if you wanted a lower-quality algorithm to go through the tests undetected.
Finally, ATI takes image quality extremely seriously and we are confident that we set the bar for the whole industry. We don’t undertake changes to our filtering algorithms lightly, and perform considerable on-line and off-line image analysis before implementing changes. This algorithm has been in public use for over a year in our RADEON 9600 series products, and we have not received any adverse comments on image quality in that time. If anyone does find any quality degradation as a result of this algorithm, they are invited to report it to ATI. If there is a problem, we will fix it.
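To put ATI's statement in context, it helps to see what textbook trilinear filtering actually does: it blends bilinear samples from the two nearest mipmap levels according to the fractional level-of-detail, hiding the transition between levels. The sketch below is the standard technique as commonly described, not ATi's (or Nvidia's) hardware implementation; textures are represented as plain 2D lists of brightness values for simplicity.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def bilinear_sample(tex, u, v):
    """Bilinear filter within a single mipmap level.
    tex is a square 2D list; u and v are normalized [0, 1] coordinates."""
    n = len(tex)
    x, y = u * (n - 1), v * (n - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, n - 1), min(y0 + 1, n - 1)
    fx, fy = x - x0, y - y0
    top = lerp(tex[y0][x0], tex[y0][x1], fx)
    bot = lerp(tex[y1][x0], tex[y1][x1], fx)
    return lerp(top, bot, fy)

def trilinear_sample(mips, u, v, lod):
    """Trilinear = blend bilinear samples from the two nearest mip
    levels; the fractional part of 'lod' drives the blend weight."""
    lo = min(int(lod), len(mips) - 1)
    hi = min(lo + 1, len(mips) - 1)
    return lerp(bilinear_sample(mips[lo], u, v),
                bilinear_sample(mips[hi], u, v),
                lod - int(lod))
```

The blend between levels is exactly where a driver can cut corners: shrinking the range of LODs over which blending happens (and falling back to cheaper bilinear elsewhere) saves texture bandwidth, which is what the whole "brilinear" controversy is about.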
From my perspective, if ATI's claims prove to be true, there would appear to be a difference between Nvidia's 'trilinear optimizations' and ATI's. Nvidia's appear to be on all the time, regardless. ATi's, however, alters its behavior depending on the texture and/or sample information passed to it by the application. As a result, the ATi approach provides full trilinear filtering when there are larger-than-normal variations between mipmap levels, which is where it is needed most, and less than full trilinear when the variations between mipmap levels aren't that large.
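ATI describes the decision logic only at a high level (the algorithm is patent-pending), but the stated idea is simple: predict the next mipmap level with a box filter, compare the prediction against the mip level actually supplied, and only spend full trilinear filtering where they differ noticeably (as with colored mip levels). The sketch below is purely hypothetical; the mean-absolute-difference metric and the `threshold` value are my own assumptions, not ATi's.

```python
def box_downsample(tex):
    """Predict the next mip level by 2x2 box filtering (averaging),
    the 'typical case' ATI says its optimization targets."""
    n = len(tex) // 2
    return [[(tex[2 * y][2 * x] + tex[2 * y][2 * x + 1] +
              tex[2 * y + 1][2 * x] + tex[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(n)] for y in range(n)]

def mip_deviation(actual, predicted):
    """Mean absolute difference between the supplied mip level and the
    box-filtered prediction."""
    n = len(actual)
    return sum(abs(actual[y][x] - predicted[y][x])
               for y in range(n) for x in range(n)) / (n * n)

def needs_full_trilinear(level, next_level, threshold=0.05):
    """Hypothetical decision: full trilinear only when the supplied next
    mip differs noticeably from what a box filter would have produced
    (e.g. hand-authored or colored mip levels)."""
    return mip_deviation(next_level, box_downsample(level)) > threshold
```

Note how this also explains why colored-mipmap tests show different results: artificially colored levels deviate maximally from the box-filter prediction, so the heuristic falls back to full trilinear without any application detection.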
Upon closer inspection, the type of trilinear filtering the X800 series uses isn't something new: ATi has had a graphics card doing it all along for the past 12 months, the Radeon 9600; we just failed to notice. We're talking about the graphics cards using ATi's RV3XX series processor: whereas the R3XX (Radeon 9500/9700/9800) had a straightforward implementation of trilinear filtering, the RV3XX (Radeon 9600) has had this behavior from day one. And not surprisingly, the R420 draws its texture filtering capabilities from the RV3XX rather than the R3XX, hence it can do the same thing.
At the end of the day, there are just a few questions that come to mind when evaluating to what extent ATi's shortcuts affect image quality. First, when does the R420 engage full trilinear filtering? Or rather, what is considered a large enough variation between mipmap levels for the R420 to use full trilinear filtering? The obvious next question is what filtering is used when less than full trilinear is applied. Whatever the answers may be, we're still faced with the realization that we've been looking at the Radeon 9600 for about a year without noticing any image quality issues. That further strengthens the notion that a shortcut, optimization, cheat, or whatever you'd like to call it, that only shows up when you scrutinize screenshots with a microscope isn't really an issue.