Thank you for expressing these ideas so well. This has been my reaction to the comparisons I have seen so far in reviews of the X800 XT. Many reviewers seem to hold intense hostility toward NVidia and relish proclaiming the superiority of ATI's new card. In one review I even saw repeated instances of a 5-10% frame-rate advantage for the 6800 being called something like "ekes out a slim win" or "leads by an insignificant margin," while a similar advantage for the X800 was hailed as "a wide margin," etc. To me, the true comparison should be between the 6800 Ultra and the X800 Pro, not the XT. The XT seems to be a cobbled-together trump card thrown in at the last minute to steal the spotlight from NVidia. That's a perfectly legitimate business strategy, but if we're going to give it credit, let's compare it to the higher-clocked 6800 that should be revealed any day now. And when we do, let's not stoop to snide comments about NVidia throwing out a hasty desperation product to save face. ATI delayed the launch of the X800 series by more than a week because they knew that the X800 Pro wasn't going to cut it.
Aside from the bias and the marketing war, the main points are the ones you express so well--the NVidia drivers have a lot of room for growth, and so do the 6800's clock speeds. Let's see what happens when it's running at a 500MHz core.
Not to beat a dead horse, but I just can't fathom the hostility of many power users toward NVidia. Do they want the company that brought them the TNT, the GeForce, and the NForce chipsets to fail? Do they feel they were ripped off by the value they got from these products? Do they think ATI will keep pushing the frontiers of graphics power if it has no serious competition? NVidia has had one serious stumble in all its years. I have used and enjoyed good value from products by both companies, and I hope they both prosper for years to come.
Well, nice conclusions, but you are assuming the IBM and TSMC processes have the same quality. For instance, TSMC is using low-k dielectrics, while IBM is not. There are differences between manufacturers' processes that cannot be bypassed by simply assuming both have the same 600MHz limit. Your second assumption is that NVIDIA and ATI are using 100% the same pipeline structure; they are definitely not. The third assumption is that NVIDIA and ATI are using 100% the same design tools and techniques. The fourth assumption is that the NV40 and R420 are one and the same chip, manufactured with the same process.
So you cannot draw conclusions based directly on the micron number. The two use different manufacturers, different pipeline and chip structures/complexity, and most likely somewhat different design methods. As pixel pipelines get deeper and deeper, their complexity goes up as well, which leaves room for different kinds of approaches. This has an effect on clock rates.
NVIDIA seems to have chosen a complex chip with good IPC and a lot of features. ATI has taken a somewhat different road with a simpler structure and fewer features that allows higher clock speeds, but the end result in today's games is still pretty much the same.
When the R420's overclocking results come in, I bet there won't be a big difference compared to the NV40's results.
For now, all that I can go by are assumptions, and my experience as an MSEE with chip manufacturing. Neither company will tell you exactly what they have in the works, of course. What I can tell you about ATi is this, however: they're using TSMC's Black Diamond, or to be more specific their CL013LV process, which is their high-speed 0.13-micron process. ATi of course has a proven track record with this process, as they used TSMC's low-k 0.13-micron process to manufacture the Radeon 9600XT.
As for IBM, which will manufacture Nvidia's processors: IBM has been manufacturing PowerPC processors for a long time, and their process technology is regarded as one of the world's best. IBM has dedicated a considerable amount of capacity, rumored to be up to 75% of their total, to manufacturing NV40s; capacity that was previously used for internal consumption or for PowerPC and other dedicated processors.
I'm not saying they have identical chips or pipeline architectures. The fact remains that a processor with >150 million transistors manufactured at 0.13-micron will have a certain die size. That results in a certain time, measured in nanoseconds, needed for a signal to propagate from one end of the chip to the other. This time simply limits the maximum clock speed, which can be calculated to be around 600MHz.
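To make that kind of calculation concrete, here is a rough back-of-envelope sketch of the argument. Every number in it is an illustrative assumption (the die area and the effective on-chip signal speed), not a measured figure for the NV40 or R420:

```python
import math

# Back-of-envelope sketch of the propagation-delay ceiling described above.
# All inputs are illustrative assumptions, not vendor specifications.

die_area_mm2 = 280.0                   # assumed die area for a >150M-transistor 0.13-micron chip
die_edge_mm = math.sqrt(die_area_mm2)  # ~16.7 mm per side for a square die

# Long on-chip wires are far slower than light in a vacuum once RC wire
# delay is factored in; assume an effective ~10 mm/ns for global signals.
signal_speed_mm_per_ns = 10.0

# Worst case: a signal must cross the full die within one clock period.
period_ns = die_edge_mm / signal_speed_mm_per_ns  # ~1.67 ns
max_clock_mhz = 1000.0 / period_ns                # ~600 MHz

print(f"Die edge: {die_edge_mm:.1f} mm -> clock ceiling: {max_clock_mhz:.0f} MHz")
```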
As for overclocking results, I already hinted at those in our article: our ATi sample managed a mere 5MHz overclock, whereas the Nvidia sample was able to handle 20MHz. I'm quite sure that's not what you'll see with actual shipping products; results should be better as the process matures. But you can definitely see that there's more headroom for Nvidia, simply because of their higher IPC: every MHz gives them more relative performance, whereas with ATi you'll need bigger leaps to get the same net result.
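To put rough numbers on that IPC point, here is a minimal sketch using a crude performance-equals-IPC-times-clock model; the IPC figures are made-up assumptions for illustration only:

```python
# Crude model: performance = IPC * clock. The IPC values are assumptions.
nv_ipc = 1.3   # assumed work per clock for NV40 (higher IPC, lower clock)
ati_ipc = 1.0  # assumed work per clock for R420 (lower IPC, higher clock)

# Each extra MHz is worth its IPC in absolute performance units,
# so a higher-IPC chip gains more from the same overclock.
nv_gain = 20 * nv_ipc    # the 20 MHz overclock on the Nvidia sample
ati_gain = 5 * ati_ipc   # the 5 MHz overclock on the ATi sample

# MHz ATi would need to match Nvidia's 20 MHz gain under this model:
mhz_to_match = 20 * nv_ipc / ati_ipc
print(nv_gain, ati_gain, mhz_to_match)  # 26.0 MHz -> bigger leaps needed
```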
And indeed, what matters at the end of the day is the result; how they got from A to B isn't really important to the end user. It does, however, give me and other journalists/analysts some insight into where they'll be heading and what headroom/scalability is built in.
I don't like to be drawn into brand loyalty disputes as such, but I do take exception to the analogy of a ruby compared to a diamond. This would tend to imply that the NVidia 6800 has a superior architecture to that of the X800 and that it simply needs new drivers to clean up what issues it now has.
Both the NV40 and the R420 are new designs that build on what has come before, and both companies should be commended for creating designs that double the performance of their predecessors. What I find interesting is how the two companies' designs have begun to converge. Rather than coming up with radically different approaches, both companies have taken similar paths in their solutions to building a next-generation GPU.
Of course, this is driven by the standards that programmers are using to design next-gen games, and there are only so many ways you can design a GPU that meets DX9 and OpenGL requirements. There are obviously differences between the two designs. NVidia has chosen to implement PS3.0 and 32-bit precision, whereas ATI chose to stick with PS2.0 and 24-bit precision, apparently saving a 32-bit design for improved fabrication processes.
NVidia's design choice has obviously created a much larger power demand, resulting in a more cumbersome two-slot design that requires a 480-watt PSU, while ATI's less demanding design only requires a single slot and a 300-watt PSU. Both cards have amazing performance, and while the NVidia 6800 Ultra does get higher numbers in some of the benchmarks, the ATI cards win in others, and their single-slot solution isn't as bitter a pill to swallow.
NVidia has the 6800 GT, a single-slot solution with only a 300-watt PSU requirement. It does quite well in the benchmark testing, only slightly behind the X800 Pro, and I would venture that the NVidia faithful will flock to this card. The reality is that only a select few will be able to afford the $500-plus that both ATI and NVidia are asking for their premium top-of-the-line products. The Radeon 9800 Pro only became a mainstream video card when its price dropped into the $200 range.
Of course, with these new cards doubling or tripling the performance of last-generation cards such as the Radeon 9800 and 9600 series and the NVidia 5900, 5800, and 5700 series, there will likely be many more takers than with previous next-generation products. Anyone want to buy a Sapphire 9800 Pro cheap? : )
I will admit to being a fan of the Radeon series, and I won't deny that I have had some issues with NVidia's business practices in the past, but I think I am an open-minded individual and am more than willing to give credit where credit is due. NVidia has come out with an excellent-performing GPU, and while I believe the current design of the 6800 Ultra is cumbersome and power-hungry in the extreme, I believe that once the other 6800 designs hit store shelves, NVidia will do very well with this latest entry.
ATI should get credit for not getting carried away with winning the frame-rate wars. With the X800 series they have come out with a solid design that easily competes with NVidia's product and yet still remains a sensible application of next-generation design goals. Both the X800 Pro and the X800 XT are single-slot solutions with reasonable power requirements and enough performance to justify an upgrade no matter what GPU you currently have.
True, the NVidia 6800 Ultra is like a diamond in the rough, and the ATI X800 XT is what that diamond looks like once it's been cut and polished. <grins> Seriously though, both cards offer amazing leaps in performance and visual quality. Where the Radeon 9800 and the NVidia 5900 hinted at cinematic realism, both of these new cards deliver. The 6800 series does offer PS3.0 and 32-bit precision now, but I truly believe that before games start to really take advantage of those features, we will see a whole new round of GPUs coming down the pike.
Good post, and I agree that pitching one card as a polished ruby and the other as a rough diamond might skew the balance a bit, as it hints that the potential locked away inside the GeForce 6800 Ultra is far greater than anything the X800 will ever be able to reach. My views on this are as expressed in the column, but let me elaborate a bit further so you see where I'm coming from.
ATi engineered an amazing architecture when they designed the R300 and improved upon it with the R360. With the R420 they've basically pulled out all the stops and unlocked the full potential of the R300 architecture. They went to great lengths to make sure that the R420 was as efficient as possible by fine-tuning every single bit of it. That has paid off for them in more than one way: performance is twice that of the 9800 XT and on par with, or better than, Nvidia's NV40.
However, if you fine-tune something to the fullest, you can't do any more than that; all you can do is shrink the die by moving to a smaller process and increase the clock speeds. That'll buy you some extra performance, from what we've seen in the past at least 10 to 15%. However, as clock speed increases, you'll have to keep in mind that not all parts of the chip will scale accordingly; you can't simply clock it at twice the speed. Thus some rework will need to be done to slow down or speed up parts of the design to keep it functional.
That is what I refer to as a polished ruby: it is a design from a previous generation, tuned and improved to deliver its fullest potential, but at the core it is still technology developed over three years ago.
Nvidia uses a design which reminds me a lot of 3dfx's Rampage, which never made it to market but whose intellectual property now belongs to Nvidia. It is a design which is programmable and parallel on all counts and is extremely flexible. But it is also new; of course, good bits of previous architectures have been reused, but the core processing engine is a new design. Looking at the design specifications, it is evident that Nvidia cut no corners in terms of hardware; every single bit that matters is there, and then some.
That is what I refer to as a rough diamond: the potential to shine is there, and all the work is done on the hardware side, but due to its new, massively parallel and programmable nature, the drivers need time to mature and unlock more of the performance potential locked within.
Alright, on that count, with the added detail to your analogy, I can agree. I don't know if I would characterize the NV40 as brand-new technology, but I will agree that the R420 has more of the R300 in it than the NV40 has of the NV30 or NV25. Or, to be more accurate, the NV40 has more new technology than the R420.
Whether or not NVidia will have time to polish it before ATI comes out with its next-generation GPU, only time will tell. In the meantime, they are pretty evenly matched when you weigh all the pluses and minuses of each GPU.
What I'm really wondering, and what's making me wary about purchasing a Radeon X800 card, is the rumor/news that ATI is coming out with a new generation of cards by the end of the year, or very soon anyway, that is hopefully going to be based on a new architecture. I was wondering if anyone could verify this? If it turns out to be true, the X800 cards may very well be really well-optimized, streamlined, and overclocked yesterday's technology, something that I won't really be interested in, which will make me lean towards the GeForce 6800.
I do agree with a lot of what this article has pointed out, though, and the new beta ForceWare drivers that boosted Far Cry's performance a ton really add to the idea that drivers could be the key that makes the GeForce ultimately more powerful and versatile than the X800.
Myself, I am not going to buy either anytime soon. There is also another consideration for the future potential of these cards. We all know that the X800 will be PCI Express native, but the GeForce 6800 will use a bridge. Since my next total system upgrade is definitely going to be an A64 dual-channel/PCI Express board, the PCI Express performance of these cards may be a significant factor in my purchasing decision. Then again, the GeForce's AGP/PCI Express bridge may turn out to make the card very versatile: people can buy the card, upgrade their system, and still use it, while those who bought the X800 and upgraded to a PCI Express motherboard may be losing out on potential performance gains.
I think this new generation of cards is the worst one to buy. The pixel shading is not advanced at all on the ATI side, while Nvidia has PS 3.0. However, we are not going to see any advantage from that for a long while. PS 3.0's biggest feature is extending the pixel instruction limit from 512 to 65,000 or something around that number. But games are nowhere near 512; Half-Life 2 maxes out at 40 instructions.
However, by the time the next generation of cards comes out, we will be in the full swing of 64-bit technology, possibly with video cards utilizing 64-bit technology, plus in the full swing of PS 3.0 or higher; maybe PS 4.0 or an advanced PS 3.x, like a PS 3.1 with increased quality. The generation after the current one will be approaching true realism in graphics. I think those cards will be the ones to get, not this "stepping stone" generation.
Waiting THAT long isn't feasible either. 500 dollars, no matter how you look at it, is really not that much money. The idea of having spent 500 dollars while knowing that the same 500 dollars could have gotten a better-performing card is the real kicker, and that's what we're all trying to figure out and avoid right now.
I'm sorry, I'm not an expert on gemstones, so if you have any comments on that, do enlighten me; someone already pointed out to me that a 24-carat diamond isn't entirely accurate. But for all intents and purposes, it was an analogy I used to get a point across, and I'm sure you can relate to that.
Personally, I think that the doubled performance of these new cards has finally made PS2.0 or 2.1 usable, and that is what I was referring to when I mentioned that cinematic realism was finally a possibility. Of course, I am referring to what video card companies like NVidia and ATI are calling cinematic realism, not true photorealism. Obviously we are several years away from games that will be able to achieve that goal.
For one thing, we simply do not have enough system memory or storage bandwidth to support photorealism in today's PCs, let alone graphics systems that can render such realism. With these latest video cards we have reached the performance levels necessary to actually use PS2.0 and FSAA in complex scenes and still maintain reasonable frame rates. That really wasn't possible even with a Radeon 9800 XT and a 3.2GHz P4.
I think it would definitely be fair comment to say that the X800 is a stepping stone to ATI's PCI Express card. If ATI, or any GPU manufacturer, just brought out an updated card for PCI Express, it wouldn't really have that much impact; I think an all-new uber chipset combined with the new connection standard will really grab attention, and I think that is exactly what ATI is going for.
Athlon XP 2500
MSI K7N2 Delta
ATI Radeon 9800 Pro
15k 3DMark 2001, 5.5k 3DMark 2003
You're right; as I mentioned in the conclusion of our X800 evaluation, with both these new cards you need a top-of-the-line system with at least 1GB of memory to make full use of them. Even a 3.2GHz Extreme Edition Pentium 4 will be taxed to almost 100% when running games such as Far Cry at 1600x1200 with AA and AF enabled and all graphics options set to maximum. The graphics card used to be the one doing most of the work, but now the processor is once again becoming a bottleneck. I'm sure we'll see better performance once Intel manages to break the 4GHz barrier and AMD launches their new A64 CPUs. I'll add that in our testbed the 3.2 EE fared better than the 3.4 Prescott we initially thought of using; the long pipeline certainly made the Prescott perform worse than the EE, even with the EE's additional 2MB L3 cache disabled.
True, it's been quite a while since we saw boosts in graphics performance like this.
The obvious example being 3dfx, which started all of this splendor, and then NVIDIA pushing the cart with the TNT and the ongoing GeForce series later on.
As I'm growing older, the importance of this computer part is fading rapidly (for me, that is). I really am happy ATI once more included hardware-accelerated video decoding and improved upon it a bit since the 9xxx series; that is something I use quite a lot.
Just for that, my choice would be ATI. The power issues point me to ATI as well, and the size alone makes ATI the clear winner for the ever more popular small-form-factor setups like the Shuttle, etc.
And since game development takes ages, all we can enjoy right now when paying up for the NVIDIA choice is tech demos and a game called Far Cry that has trees popping up out of nowhere.
I still have an issue with the production costs of both cards.
Ever since the FX series, NVIDIA has added a large chunk of metal to its boards, adding to the price and size of the PCBs. Yet in retail, both cards end up being equally expensive.
That really just doesn't add up in my book. Either ATI board makers are making huge profits,
or NVIDIA board makers are selling at a loss. Perhaps we, the consumer cattle, are being made to pay up via set prices.
Just to make this extra clear:
I'm not all that biased towards ATI or NVIDIA.
ATI clearly has the better product here.
Games like Tribes: Vengeance, Half-Life 2, and Doom 3 will only be here when another product cycle has hit the shelves.
Your analogy to 24 carat diamonds doesn't work. When referring to gemstones, "carats" are the weight of the stone. A 24 carat diamond is much bigger than anything anyone on this site will ever buy a girlfriend/wife. In terms of precious metals, specifically gold, carats are a measure of purity, with 24 carat gold being "pure". Most gold sold on the market is 18 or 14 carat, since 24 carat gold is too soft for use in jewelry.
The comment about rubies being rarer than diamonds is incorrect, though high-quality rubies can be more expensive than diamonds. Sapphires are actually the most expensive and rarest of jewels when comparing the same size and quality.