But isn't the point that AMD is doing this right whereas Intel didn't do it very well? I mean, why release a product that you KNOW is problematic? And why announce and paper-launch a host of products on this problematic technology, only to recall them with egg on your face?
AMD is a success simply because their corporate image is not being hurt time and time again by a series of failures.
And why do they need to release faster-clocked processors? What they have out now holds its own; remember, 1GHz on a P4 does not equal 1GHz on an Athlon.
When you translate all of this, it amounts to what is obviously happening. Every effort is being made to trick out current processors (extra cache, faster FSB, etc.) to have SOMETHING to sell until they can get out their multi-core offerings. There is no affordable way past the heat/speed barrier at smaller feature sizes, so processing must be split up. Any program that has no way of using multiple processors, or at least multiple processing resources, will fall out of the market. No matter how smooth or strong the hose, a firefighter can only pump so much water through one hose, and the higher the volume and pressure, the less likely it is that the hose can be handled by anything other than a mechanical device. Besides, one hose can only put water through one window at a time. The next step is to put multiple hoses on the fire, and the analogy is apropos in more than one way.
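The "multiple hoses" point maps directly onto software: a program only benefits from extra cores if its work is split into independent pieces. A minimal sketch of that idea in Python (the function names, chunking scheme, and worker count here are my own illustrative assumptions, not anything from the article):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker is one "hose": it handles an independent slice of the job.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the data into one chunk per worker, fan the chunks out
    # across processes, then combine the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

The serial and parallel versions give the same answer; the point is that the split-combine structure has to exist in the program before a second core does it any good.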
I believe you would want to put some chlorofluorocarbon in that hose... just don't breathe. The point is that expensive and complicated cooling solutions are for hackers, geeks, and such, and products aimed solely at them will not pay for multi-billion dollar/euro fabs. Somebody may break the 3GHz or 4GHz barrier with standard cooling at 90nm and 45nm, but the limit is somewhere in that order of magnitude, meaning that multiple cores, sharing loads or running processes in parallel, are the future on the desktop, as they have been the reality in the server world for some years now. I'm not holding my breath for practical quantum processors on the desktop anytime soon. I believe that will happen at approximately the moment fusion reactors come online in the power industry. I'll get my high-resolution wall-size screen for under $500 sometime before those other two.
Yeah, I'd have to agree, massive parallelism is the way to go; that's how nature has wired our brains, and we seem to be doing all right. The problem with that approach is that it puts huge demands on silicon real estate, making it a valid but expensive solution. I'll have more on that tomorrow, with another editorial covering exactly that subject.
I assume that everybody is looking rather sharply at the SMT technology in the K8 and K9 processors from Alpha that we will now never see. At 45nm there is an enormous amount of real estate to be used, so I do believe the problem is not at all insoluble. At some point we will need to go to some form of 3D/multilayer technology to allow the kinds of interconnection needed for more than a few channels of SMT on a single chip. I believe that the work being done by Transmeta, though only vestigial, may be the clue to a way forward. Our processing resources need to be more flexible and more dynamic, to look more like that neural net we call a brain.
I'll be interested in your take on parallel processing. By the way, I arrived here through Mike Magee's domain, The Inquirer. I've been reading his stuff for a long, long time.
I've seen AMD's plans to produce the FX-57 on the 90nm process. It will be a 2.8GHz CPU, and the plan is to move ALL CPUs to 90nm too, which is quite contradictory to your article. I have a 90nm Winchester-core Athlon 64 and it's currently running at 2.3GHz. Under full load (running Prime95) I'm getting core temps of about 38 degrees. That's extremely cool for a CPU! If you look at AMD's thermal specifications, these CPUs put out far less heat than the 130nm CPUs do. Also, what shortens CPU life the most is heat, not voltage (although voltage will affect it a little).
Most Prescott cores run around 60 degrees under load! Yes, the Athlon 64 runs at a lower clock speed than the P4, but that argument is just like saying a 2.4GHz AMD is slower than a 3GHz P4. Remember when the Thoroughbred came out? It was also criticized for its heat output. Heck, I had one that hit 60 degrees under load. My next AMD core, a Barton, hit 42 to 44 under load.
This article sounds like it was written by an Intel fanboy. I'm only biased toward whatever performs best. My last computer before this one had a P4C 3.4GHz. That was good, but my new AMD is better... now if only Intel would bring their Dothan cores to the desktop...
Fanboy? I'm sorry? It is clear you missed the point, or didn't bother to read the article fully. AMD's and IBM's own tech papers tell you that there are problems when running their 90nm processors at higher clock speeds with higher voltages, as explained in the article. If you are going to call anybody a fanboy, go talk to them; I did not dream this up, I got it straight from the source. And where do you read that I prefer Intel processors over AMD? You must really be reading way too much into this; try http://www.disney.com if this article was too confusing for you.
What is the need here for calling someone a fanboy? Yes, AMD does get more per cycle out of their processors than Intel does, but I happen to be very interested in keeping both AMD and Intel in business. If either one controls the market completely, my bargains will go away, as will the current rate of technological change. The point of the discussion here is that both AMD and Intel are hitting a speed/heat/voltage barrier at 90nm and below. The barrier may be slightly different for each process and company, but it is definitely there for all of them. Yes, the barrier will drift downward with process optimization and innovation, but I believe it is inherent to the current technology, which I don't see being entirely replaced anytime soon.
Given that, we have to find different ways to use the current technology, and we have an obvious candidate with which the industry already has significant experience... parallel processing. Putting multiple cores on a single chip is not new either, but it is new to mass production, and it reduces the cost of parallel processing by eliminating much of what is required to support multiple separate processors on the same motherboard.
For the moment, it looks as though AMD has an advantage until the multi-core chips hit the market. Since Intel already has its Hyper-Threading model, it may have an implementation advantage and a small performance advantage based simply on HT in two cores. AMD's advantage is a very efficient and modular architecture for inter-chip communication and memory access; AMD does not need massive amounts of cache to maintain performance, leaving more room for processing and inter-core communication elements. The race should be very interesting, and as long as neither competitor is eliminated, shifts in balance should only benefit us... the consumers. I'm just glad I have a ticket to the fight.
AMD is a business. Maximizing profit at a certain level of productivity sometimes forces less-than-ideal circumstances on consumers. AMD needs only to stay a speed rating or two ahead of Intel to win the race. Jumping any higher would mean AMD has taken its product to EOL well before Intel has a processor to fight back with. Then, when Intel matches their speed in six months, AMD has no more headroom and must design a new product. You would want to stretch out the life of a product as long as you can, staying just one step ahead. That gives you more time for economies of scale to kick in and more time to develop a replacement for EOL.
Remember the old saying that if AMD and Cyrix had not existed, Intel would still have us using Pentium 166 processors?
My guess is that AMD is not moving more chips to 90nm because it is not needed. They maintain a rather healthy product line and a rising market share. AMD knows it will never take all of Intel's market share; no matter how much better their products are, Intel is built into the mentality of consumers. Moving to 90nm may increase chips per wafer, but if you do not have a larger market share demanding the capacity, it is a waste.
As a consumer, the process changes do not add value to a sellable product. Your major manufacturers do not list the processor's design specs, so it does not matter if the speed ratings match. Some list cache sizes, but rarely do you see processor design specs.
Since you titled this story as a question, I would go ahead and say that 90nm is needed for dual-core processors. AMD is capable of reaching its desired clock speeds with 130nm. The lower-speed 90nm processors are characteristic of a side effect of dual-core chips, which is that they run at a slower clock speed. Would it be beyond reason to say the slower 90nm chips are released to help prepare the engineering for a dual-core part and to help verify yields?
Anyone's guess could be right or maybe a combination of guesses are correct.
Your logic on marketing is essentially correct, if a bit incomplete. I believe that the 130nm-to-90nm transition is also the final changeover from 200mm to 300mm wafers. That is a huge incentive to make the changeover, because the cost to process each chip on the larger wafer is lower. Now, I will be the last person to tell you that I can resolve the Gordian knot of supply, demand, and capacity, but I suspect that if AMD holds this speed advantage for a year or more, their market share will steadily, if not spectacularly, increase. If they can maintain that very visible market advantage while markedly reducing their production costs with both the 90nm feature size and the 300mm wafers, then they can again reduce price while maintaining profits... increasing market share... and the wheel rolls on.
Now, I would say that the speed/heat/voltage barrier at 90nm is there regardless of what is done with the technology, so rather than preparing folks for the lower speeds, it is a natural result of the technology that will force the companies to multi-core chips to maintain the price/performance curve. As a certain author used to say, TANSTAAFL.
Well, given the performance stats we are seeing, yes, it is a success, and considering that AMD is planning to move to dual-core CPUs by the middle of next year, there isn't any overwhelming need to bump the CPU speed up much more. They have already stated that the dual-core chips will have cores running at a slower speed to reduce the overall heat of the chip. The fact that there are two cores instead of one is what will provide the extra bump in speed to maintain their lead over Intel, because Intel is somewhat behind with their dual-core stuff.
So there ya have it. Why go to 3GHz and strain the process any further than really needed?
Hahaha, all this and 60nm processors come out mid-2005. Oh joy! I can't wait! (Please don't flame me on what I say; they are indeed coming out in mid-2005. I have sources, and I'm not a little 10-year-old who posts stupid stuff.)
Are you SERIOUSLY suggesting that AMD is incapable of making faster processors? You do know that neither AMD nor Intel immediately ships the fastest processor they're capable of making, don't you? Why should they if the competition isn't making anything faster? Why not milk all you can from the 130nm 2.6GHz processors while people will still buy them, since there are no cooler-running 90nm 2.6GHz processors AVAILABLE TO BUY?
Xbit Labs recently did a review of AMD's new 90nm processors and showed that they overclock fairly effortlessly to 2.6GHz with only an 8% increase in voltage. At 2.4GHz they generate only about 50 watts of heat. I doubt you'll EVER see a single-core 90nm Athlon 64 generating 100 watts of heat.
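Those numbers are roughly consistent with the usual first-order CMOS rule of thumb that dynamic power scales with voltage squared times frequency. A quick back-of-the-envelope check (the ~50 W at 2.4GHz baseline comes from the post above; the scaling function and the assumption that dynamic power dominates are mine, and real leakage would push the result higher):

```python
def scaled_dynamic_power(base_watts, v_ratio, f_ratio):
    # First-order CMOS approximation: P_dynamic ~ C * V^2 * f,
    # so power scales with the voltage ratio squared times the
    # frequency ratio. Ignores leakage entirely.
    return base_watts * (v_ratio ** 2) * f_ratio

# 2.4 GHz -> 2.6 GHz with an 8% voltage bump, starting from ~50 W:
estimate = scaled_dynamic_power(50, 1.08, 2.6 / 2.4)
print(round(estimate, 1))  # about 63 W under this idealized model
```

Even under this idealized model, an 8% voltage bump costs about 17% more power before the frequency increase is counted, which is why small voltage increases are not free.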
Are you aware that Intel's problems with the Prescott are NOT solely due to the 90nm manufacturing process? They made other architectural changes to the processor that increased its power requirements. Also, don't forget that AMD is using SOI, a technology that reduces current leakage, and Intel is not.
You think AMD is going to run into the same heat problems as Intel? You're forgetting one HUGE difference...
The Prescott, when compared to a Northwood at the exact same clock speed, runs significantly hotter.
The Winchester, when compared to a Newcastle at the exact same clock speed, runs slightly cooler... even with a 40% smaller die through which to dissipate heat.
So... while AMD may still be perfecting the 90nm process and working out the kinks, it has nothing to do with heat. They may indeed run into a clock-speed wall because of the physical limits of silicon transistors, as they did with the Athlon XP... but that's an entirely different matter... NOT the same problem Intel is having with the Prescott.
You miss the point: the problem with 90nm is that it has a breaking point, which wasn't there with previous die-shrinks. Both Intel and IBM/AMD acknowledge it is there, and once you reach that point, power leakage significantly increases power drain and heat production with very little return in clock speed. Traditionally a die-shrink meant a smaller die and less power drain at the same clock speed, plus the potential to scale upward in clock speed. With 90nm this only works up to a point; beyond it there are diminishing returns, where power drain and heat production climb so steeply that higher clock speeds work but are not a viable solution due to the excess power leakage.
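The shape of that breaking point can be sketched with a toy model: dynamic power grows with V²·f, while leakage grows much faster (roughly exponentially) with voltage. Every coefficient below is invented purely to show the curve's shape, not real data for any chip:

```python
import math

def total_power(v, f, c_dyn=20.0, leak0=2.0, k=8.0, v0=1.0):
    # Toy model: dynamic power ~ C * V^2 * f, while leakage current
    # grows roughly exponentially with voltage above a reference point.
    # All coefficients are made up for illustration only.
    dynamic = c_dyn * v * v * f
    leakage = leak0 * math.exp(k * (v - v0))
    return dynamic + leakage

# Buying each extra 0.2 GHz with an extra 0.1 V costs more watts
# than the step before it: the returns diminish.
for v, f in [(1.0, 2.0), (1.1, 2.2), (1.2, 2.4), (1.3, 2.6)]:
    print(f"{f:.1f} GHz @ {v:.2f} V -> {total_power(v, f):.0f} W")
```

In this sketch each identical frequency step costs successively more power, which is the "works up to a point, then diminishing returns" behavior described above.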
I recall THG having very similar problems with PIII Katmai at 600MHz and Coppermine at 1GHz both requiring much better cooling than their slower brothers. That is what happens if you push a core to speeds it can't really handle anymore - it doesn't seem to be 90nm specific.
But at 2.8GHz the Prescott already produces more heat than the Northwood, and as frequency goes up the difference actually seems to decrease! Which means the problems IBM has with the 970FX are entirely different from those Intel has with the Prescott.
Actually, HardOCP mentioned that raising the voltage on a 90nm A64 didn't really seem to help overclocking, which contradicts your 970FX graph. And at 2.2GHz the Winchester uses less power than the Newcastle, which means it can't have the same problem as the Prescott.
So, in conclusion:
1) Intel has a technical problem with Prescott.
2) IBM has a different technical problem with the 970FX.
3) AMD may or may not have a technical problem with raising Winchester speeds, but certainly not the same one as either of the first two companies.
And so far there is very little reason to assume AMD has a technical problem with 90nm at all.