Hardware Analysis Forums / Rambus PC-1066 and PC-1200, Pushing the Envelope

  Questions about the technology and the testing..... 
 Date Written 
Robert Kropiewnicki Nov 15, 2001, 03:18pm EST Report Abuse
Before I comment on the results, could you clarify a few things for me regarding the technology?

I know that the FSB of a current P4 chipset is 100MHz QDR (effectively 400 MHz). Is PC800 memory actually 800 MHz (as opposed to being double pumped, quad pumped, representing bandwidth instead of speed like PC2100, etc)?

If so, does the speed of the memory bus automatically adjust depending on the memory installed (PC600 to PC800 to PC1066 to PC1200)?

Does the FSB speed have to be some multiple of the memory bus, or can they be asynchronous (e.g. the KT133 chipset: 100 MHz DDR FSB, 133 MHz memory bus)?

Dan Mepham Nov 15, 2001, 07:06pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
PC800 RDRAM actually operates at 400MHz, but can transfer twice per cycle (a la DDR), for an effective 800MHz. PC600 is actually 300MHz, double pumped. And so on.

On the 850 chipset, yes, the memory bus actually operates at a multiplier of the system bus. In the case of the 850, the options are 3X and 4X. That is, the memory can operate at 3X the system bus (3x100MHz = 300MHz, PC600), or 4X the system bus (4x100MHz = 400MHz, PC800). Remembering, like you said, that the P4 FSB actually runs at 100MHz, but transfers 4x per cycle (400MHz effective).

On most boards this would usually be auto-detected (3x multiplier when PC600 is installed, 4x for PC800), or can be changed in the BIOS.

When the P4 moves to a 133MHz FSB, the likely RDRAM matchup would be with PC1066 (actually 533MHz). That would need a 4X FSB:Mem multiplier (4x133=533). At that point, though, Intel will probably launch an asynchronous RDRAM chipset. The current P4X266 and SiS P4 chipsets are asynchronous when using DDR266 or DDR333. DDR200 is actually synchronous (both the FSB and the memory clocks are at 100MHz).
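
The multiplier arithmetic Dan describes can be sketched in a few lines. This is just an illustration of the numbers in his post; the 16-bit (2-byte) channel width and the dual-channel i850 layout are standard RDRAM/i850 facts rather than anything stated in the thread:

```python
def rdram_effective_mhz(base_clock_mhz):
    """RDRAM transfers data twice per clock cycle (double data rate)."""
    return base_clock_mhz * 2

def i850_memory_clock(fsb_mhz, multiplier):
    """The i850 runs the RDRAM clock at a fixed multiple (3x or 4x) of the FSB clock."""
    return fsb_mhz * multiplier

def channel_bandwidth_mb_s(effective_mhz, bytes_per_transfer=2):
    """Each RDRAM channel is 16 bits (2 bytes) wide."""
    return effective_mhz * bytes_per_transfer

# PC600: 3x a 100 MHz FSB -> 300 MHz clock, 600 MHz effective
assert rdram_effective_mhz(i850_memory_clock(100, 3)) == 600
# PC800: 4x a 100 MHz FSB -> 400 MHz clock, 800 MHz effective
assert rdram_effective_mhz(i850_memory_clock(100, 4)) == 800
# PC800 moves 1600 MB/s per channel; the dual-channel i850 doubles that
print(2 * channel_bandwidth_mb_s(800))  # 3200 MB/s
```

On a nominal 133 MHz FSB (really 133.33 MHz) the same 4x multiplier lands on ~533 MHz, i.e. PC1066.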

Hope I didn't miss anything. :-)

Dan Mepham
Dan P Nov 15, 2001, 11:44pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
Judging by this review, it seems as if the Asus P4T-E has RAM clock generators capable of producing 533MHz to the RDRAM. Is this the only socket 478 motherboard that has clock generators which support this? I would really like the Abit TH7II. I know it can run a 533MHz FSB, but I don't know if its RAM clock generators can provide 533MHz to the RDRAM. Do you know anything about this?

In addition, do you know what core voltage the new Northwood P4 will run at?


Brian Lonsdale Nov 16, 2001, 03:59am EST Report Abuse
>> Re: Questions about the technology and the testing.....
I have a few questions and issues to raise about your review and this technology.

Firstly, why do you use a resolution of 640x480? I realise the intention is to show the differences in bandwidth amongst the memory types, however the test itself is meaningless. I would never consider playing Quake III at this low a resolution, and I don't believe anyone who reads these articles would, either. The minimum standard to test at should be 1024x768. Is it because the result would not show such a large difference? You briefly stated that the video card becomes the bottleneck at higher resolutions, so why do I need PC1200 and not PC800 if the bottleneck is elsewhere? Why do I need 200+fps at this resolution?

Testing Office applications is getting more and more redundant nowadays. Are there no better tests to show what this new technology can do? I don't need Office to load any quicker than it does or run any faster. There's only so much I can do in a day, and having it load a whole second faster only buys me so much extra thinking time!

Don't get me wrong, I like the technology, I like the fact that there's choice - I like the fact that something new and faster is coming out almost every other week. What I don't like is the inherent cost that goes with it. RDRAM is still more expensive than DDR, and comparing equally configured systems as far as possible, a DDR-based system is still cheaper and performs better pound for pound. What kind of cost are these PC1200 modules going to come in at? It's going to be a damn sight more than even the next generation of DDR. I could save that money, buy NVIDIA's next generation videocard, and then play Quake III at 640x480 and get 300+fps instead of 200+fps.

To me, DDR seems to be scaling better with the rest of the PC components - a modest increase each time which complements the other technological improvements. Why do we need a memory bandwidth that great? We don't...but Intel's pegged its CPU release schedule on RDRAM. The only way Intel can get the performance delta to improve in its CPUs is to get the memory bandwidth up...and the only area I can think of which needs this kind of performance delta is servers. Which makes games and office applications mostly redundant! This is why I think testing this technology using gaming benchmarks and office benchmarks is pointless!

Gianni Rodosi Nov 17, 2001, 12:43pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
All good points, Brian.

In Intel's and Rambus' defence, I can say that someone has to plan for the mid and long term, and things can't be incrementally improved forever.

Intel has often had the strength and cunning to do so.

In my opinion, though, you can't always guess right, and neither can Intel ;)

Let's see.

BTW, have you noticed that today synchronous designs are prevalent - because they perform better - and that this fact, in a way, ties certain types of memory to certain types of processors?

See you.

Dan Mepham Nov 17, 2001, 01:25pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
Hi Brian,

In answer to your first point, I think you're right, but I think you're wrong too. The problem here is that there are two very different types of benchmarks, and people don't usually realize that.

There's the practical benchmark. This is a benchmark designed to directly show how this technology affects YOU. Using Quake 3 as an example, this would involve running at 1024x768x32, using video cards and systems representative of what most readers would have. And in this test, as you said, other factors would bottleneck, and the RDRAM speed would likely make no difference. This is what you're after, I think. And there's nothing wrong with that at all. It's totally correct, and it's what you want to see.

But it's not the ONLY correct type of benchmark.

What Sander did was show a more theoretical benchmark. A benchmark of the technology, rather than of 'real world' performance. The purpose of such a benchmark is to isolate and magnify differences in the technology, so it can be discussed from an engineering standpoint. Naturally this type of test may have no bearing on real world performance. We didn't say it did. I see these benchmarks more as educational tools. And I don't think there's a problem using them, as long as we're clear about what they are. Think about a benchmark like CacheMem or Linpack. If something scores 20% higher in CacheMem, it almost certainly DOESN'T mean it'll be 20% faster in real world usage. But does this make CacheMem any less useful as an educational tool?

This is an interesting issue for me, because EVERYONE seems to have their own idea of what a benchmark SHOULD be, but no one seems to acknowledge that other people have other ideas. I see no problem with theoretical/educational style benchmarks, or with practical/real world benchmarks, so long as we're all clear about which is which, and we don't try to pass one off as the other.

This has come up before. Some sources have been claiming Photoshop is an invalid benchmark because it uses SSE, but not 3DNow! instructions, giving Intel CPUs an unfair advantage. To me, that's ridiculous. You can't say it's an invalid benchmark. That's like saying a minivan is a useless automobile, because you don't have kids. Yes, it's useless to you, but not everyone is you. You can, however, say it's an invalid THEORETICAL benchmark for comparing Intel vs. AMD because it's unfairly slanted toward Intel CPUs. But as a real world benchmark, it's totally fine. If I use Photoshop, I want to know that it goes faster on an Intel CPU, regardless of how 'unfair' that may be, or how it's 'cheating'. For me, it's faster on a P4, and that's that.

In that same respect, you can absolutely claim that the Quake 3 benchmark Sander used is totally invalid as a REAL WORLD benchmark. It is. Absolutely. But that doesn't make it globally useless. It's still an excellent theoretical/educational benchmark, and I think that's how Sander was using it.

We all just need to realize that there are two very distinct types of benchmarks, and that both of them are absolutely acceptable, and we need to be more clear about which is which.

Dan Mepham
Dan Mepham Nov 17, 2001, 01:28pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
Perhaps in future articles, we could divide benchmarks into two sections: Theoretical and Practical.

Theoretical/Educational tests would include stuff like CacheMem, and Quake 3 at 640x480x16.

Practical would include items like SYSMark, and Quake 3 at 1024x768x32.


Dan Mepham
DaveO Nov 17, 2001, 02:29pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
Sounds like a good idea Dan. It seems that there's often a bit of controversy surrounding HA reviews and editorials (although no doubt all sites suffer this), so anything that more clearly explains the results and conclusions you guys come up with may help readers understand more easily.

In the case of this particular article however, to me the intention of showing theoretical performance gains was fairly clear, thus running Q3 in 640x480 to factor out the video card as much as possible. However, perhaps this wasn't as obvious as it could be, and if you'll pardon the slightly underhand use of snippet quotes:

"To evaluate how much of a performance increase PC-1066 and PC-1200 offer when running multimedia tasks such as games and other streaming content we'll be using Quake III."

Might suggest this is in some way a real-world test. This is of course offset at the end of the same page by:

"...this is largely due to the fact that at higher resolutions the video card becomes a bottleneck rather than the CPU or the memory subsystem."

But clearly these two slightly conflicting statements have managed to confuse some readers, and putting it all under the heading 'Theoretical' would probably have eliminated this. However, overall I personally quite liked the article; in this time of slowdown it's nice to see something that isn't yet another heatsink review ;)

What does this button do?
Dan Mepham Nov 17, 2001, 03:05pm EST Report Abuse
>> Re: Questions about the technology and the testing.....
I realize that this is largely a branding issue. Perhaps it wasn't clear enough to readers that that particular benchmark was intended to be theoretical only (and that, yes, we know no one plays Q3 @ 640x480). So perhaps classifying all our benchmarks as either 'Theoretical' or 'Practical' would help make intentions more clear. This is something we can work at.

Dan Mepham
Brian Lonsdale Nov 18, 2001, 10:50am EST Report Abuse
>> Re: Questions about the technology and the testing.....
Thanks for the replies, everyone. I agree that the tests you run should be split between headings under theoretical and practical - I think this would keep each of the tests in their respective contexts.

I still think that using something like a gaming benchmark at that resolution is inherently the wrong benchmark, though. Using something strictly theoretical that isolates memory performance (such as Linpack) would be much better, because it's more suited to the task. Although Quake III at that low a resolution does show some differences, it's just not the best test to evaluate the technology. The Linpack tests evaluate memory performance only and spit out numbers relating to memory performance; video cards just don't come into the equation. To me, using fps to evaluate memory bandwidth is like using the quality of the leather to evaluate the differences between minivans. It's simply not the right criterion to base an evaluation on. It's part of it, certainly, but just not the best. Gaming benchmarks are best suited to practical measurements at an appropriate resolution, with some commentary and evaluation on their real world difference and application. Which is where my comments about the practical uses for RDRAM come from. BTW - if you've read the Inquirer recently, there was an article on there saying Intel have said RDRAM for the desktop is all but dead, save for some specialist areas.
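
As a rough illustration of the kind of test Brian is describing - one that isolates memory throughput and never touches the video card - a minimal copy-bandwidth sketch might look like the following. This is my own toy example, not Linpack or CacheMem, and the function name and defaults are invented for illustration:

```python
import time

def copy_bandwidth_mb_s(n_bytes=64 * 1024 * 1024, repeats=5):
    """Time a large in-memory copy and report MB/s. The result tracks memory
    throughput rather than CPU or video-card speed, which is the whole point
    of a 'theoretical' memory test."""
    src = bytearray(n_bytes)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)  # one full read plus one full write of the buffer
        best = min(best, time.perf_counter() - t0)
    return (2 * n_bytes) / best / 1e6  # count both read and write traffic

print(f"{copy_bandwidth_mb_s():.0f} MB/s")
```

Like CacheMem or Linpack, a 20% higher number here would not mean 20% faster applications; it only characterises the memory subsystem in isolation.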


