Welcome back! I’m sure you noticed that Nvidia just launched their GeForce FX as a select number of sites posted their reviews. We weren’t part of the ‘official’ sites, for obvious reasons: we’ve been fairly critical of Nvidia’s tendency to hype their products well beyond what they actually deliver, and thus they didn’t send us a sample. That doesn’t mean we didn’t have one on time; one of the launch partners shipped us a sample. Unfortunately, that sample did not make it through our first round of testing and had to be sent back, and we were only able to get a new card early this week, thus missing the launch window.
But we’ve worked hard to get a review out the door quickly. Frankly, we did not stick to the official ‘review guidelines’ Nvidia sent along with their samples, but instead put the GeForce FX through the same stringent procedures we use for other products. Our review is therefore slightly different from others, as we felt compelled to once more put Nvidia’s performance claims to the test and do a reality check with the actual shipping product. Here are a few clips from the article as we finish up the review, to be posted today or tomorrow at the latest:
But what’s new about the GeForce FX? Well, frankly, not a whole lot. Despite the massive transistor count, no less than 125 million, the GeForce FX brings only a few new features to the table. The first is the much-touted CineFX Engine, which is just another pretty name for the GeForce FX’s vertex and pixel shaders and the addition of DirectX 9.0 support, no more, no less. Intellisample Technology is the next buzzword; it basically covers Nvidia’s new anti-aliasing and anisotropic filtering implementation in the GeForce FX, and there’s frankly nothing revolutionary about it either, just another evolutionary step over the previous generation. The one feature that does seem to weigh in heavily on the ‘new and innovative features’ scale is the use of DDR-II memory running at no less than 500MHz, which yields an effective clockspeed of 1GHz. Unfortunately Nvidia chose to use a 128-bit wide databus for this new memory architecture, effectively putting the brakes on peak bandwidth and possibly overall performance, but we’ll get to that later.
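That bandwidth penalty is easy to quantify with some back-of-the-envelope math. The sketch below is ours, not Nvidia’s: the 128-bit bus and 1GHz effective clock come from the figures above, while the Radeon 9700 Pro comparison assumes its commonly cited 256-bit bus at 620MHz effective.

```python
def peak_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak theoretical memory bandwidth in GB/s (1 GB = 10^9 bytes).

    bandwidth = bus width in bytes * effective transfers per second
    """
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# GeForce FX: 128-bit bus, 500MHz DDR-II = 1GHz effective (per the article)
gffx = peak_bandwidth_gb_s(128, 1000)    # 16.0 GB/s
# Radeon 9700 Pro: 256-bit bus at 620MHz effective (commonly cited spec)
r9700 = peak_bandwidth_gb_s(256, 620)    # 19.84 GB/s

print(f"GeForce FX:      {gffx:.2f} GB/s")
print(f"Radeon 9700 Pro: {r9700:.2f} GB/s")
```

In other words, despite the headline-grabbing 1GHz memory clock, the narrow bus leaves the GeForce FX with less raw bandwidth than its six-month-old competitor.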
Oh, and did I mention that the cooling solution takes up two slots, the AGP slot and the adjacent PCI slot? Not because we get triple DVI or video-in/out, but because the cooler needs the second slot for air intake; how’s that for a change? Nvidia pitches it as a ‘radical and revolutionary patent pending dynamic thermal management solution, that’s an enthusiasts dream and gives gamers the right to brag’. We’ll just call it like it is: a bulky, noisy heatsink that could probably double as a hairdryer and really does nothing to impress us. If there’s one thing to brag about, it is the fact that the noise of the fan will probably drive people nuts faster than any other videocard before it; whether that’s something positive we’ll leave up to you.
We’ll leave you with that for now and get right back to work, as the whirring noise of the GeForce FX’s heatsink fan has just gone silent, indicating that another benchmark run has been completed. We’ve got a few more runs to do before we have a reasonably complete picture of what the GeForce FX brings to the table. Up until now I can honestly say I’m not too impressed; the Radeon 9700 Pro is holding firm ground amidst all the MHz violence. If only the fan on the GeForce FX were a little more powerful, it could literally blow it away.