To make matters worse, the development of the 45nm chip would have sucked up resources from the follow-on 32nm variant and subsequent designs. There would have been a domino effect of delays and problems, all from a very questionable starting point.
So Dirk wisely pulled the plug, and instead of getting credit for doing the right thing, AMD got punished by the financial folk. The cancellation also avoided many more problems than most people could imagine. As we said earlier, the current Bulldozer is more or less everything the reschedule was meant to avoid. How could this have happened if the parts now available are the second or third generation?
There is no one simple answer; a lot of little things went wrong, and the death of Bulldozer was indeed by a thousand cuts, maybe more. No single one is a big deal, but together they take the chip from a really good idea to, well, aspirations of mediocrity. The first problem is simple: Bulldozer was designed and implemented in a different era.
What were the primary OSes in use when the architecture was laid out? What were storage and IO capabilities like? Had you even heard of SaaS or web applications, much less used them on a daily basis? How much of your video was delivered on your PC? The software landscape was different, and so were the projections for where it was headed. The changes to computing over that time frame have been immense, and that is the backdrop for a lot of what went wrong, even if it is not directly to blame.
If you look at some of the details, a few things stand out. First, the cache latencies, as measured by Michael Schuette in the Lost Circuits article, are, in a word, horrific.
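Latency numbers like the ones cited above are typically obtained with a pointer-chasing microbenchmark: every load depends on the previous one, so the CPU cannot overlap them, and the time per step approaches the latency of whatever cache level the working set fits in. The sketch below illustrates that access pattern; interpreter overhead dominates in Python, so treat the numbers as an illustration of the technique, not as real cache latencies.

```python
import random
import time

def chase(n_slots: int, steps: int) -> float:
    """Return average nanoseconds per dependent access."""
    # Build a random cyclic permutation: slot i stores the index of the
    # slot to visit next. The random order defeats hardware prefetchers.
    order = list(range(n_slots))
    random.shuffle(order)
    next_idx = [0] * n_slots
    for a, b in zip(order, order[1:] + order[:1]):
        next_idx[a] = b

    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = next_idx[i]        # each load depends on the previous one
    t1 = time.perf_counter()
    return (t1 - t0) / steps * 1e9

print(f"{chase(1 << 12, 100_000):.1f} ns/step")
```

In a low-overhead language (C, for instance), sweeping `n_slots` past each cache size makes the per-step time jump at every level of the hierarchy, which is how per-level latencies are read off.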
The situation is ugly enough that it may explain why so many executives left AMD over the past twelve months, and why the company was so tight-lipped about their departures. As a consequence, the results here will be lower than in a standard review, particularly for single-threaded performance.
When AMD designed Bulldozer, it was aiming for a CPU that would be easier to ramp to higher frequencies while maintaining the same IPC (instructions per clock cycle) as its six-core predecessor.
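The trade-off behind that goal is the first-order performance model: throughput is roughly IPC times clock frequency. The figures below are made-up placeholders, not real AMD numbers, but they show why a clock-speed gain only pays off if IPC holds steady.

```python
def throughput(ipc: float, freq_ghz: float) -> float:
    """Instructions per second, in billions (IPC * GHz)."""
    return ipc * freq_ghz

# If a new core keeps its predecessor's IPC but clocks 25% higher,
# it is ~25% faster on this simple model...
base = throughput(ipc=1.0, freq_ghz=3.2)
fast = throughput(ipc=1.0, freq_ghz=4.0)
print(round(fast / base, 2))       # 1.25

# ...but if IPC drops 20% in the process, the clock gain is wiped out.
regressed = throughput(ipc=0.8, freq_ghz=4.0)
print(round(regressed / base, 2))  # 1.0
```

This is exactly the trap the rest of the article describes: Bulldozer got some of the frequency but gave back too much IPC to come out ahead.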
A stellar technical explanation.
Funny and sad at the same time. But I still wonder to this day why and how AMD ended up in such a technical mess.
If Bulldozer had shipped with a 32KB L1 that wasn't write-through, things might have been different. But that probably wasn't justified with a 32K L1 cache, and I am sure that AMD must have simulated it. You present a lot of rumor as fact:

1. Andy Glew left well before the Bulldozer that was released.
2. There is nothing that needs a higher instruction window for SR; it was removing a decode bottleneck, because the unit was round-robin. Zen can decode up to 32 bytes a cycle and yet, according to Agner, rarely gets above 17; a uop cache would serve the exact same purpose and do it at lower power, just like in Zen, SB, Haswell, Skylake etc.
3. The L1 arrays aren't even that big in BD; we are talking 1 or 2mm a module.
4. BD had all non-memory functions on only 2 ALUs, which creates unit contention, like mul and branch on the same port.

I recall Keller described the BD design as something like "trying to make the ocean boil". I think perhaps, after K7, Meyer's people were in a sense convinced that they had accomplished something that big before and therefore could do it again.
Forcing an impossible design structure from the top down, that is. But I like to see this in a broader sense: AMD perhaps also has an engineering culture that promotes such designs, a kind of wish to do everything. It seems to me this can also be seen in their product portfolio, especially for GPUs, which looks way too wide. Ironically, Raja said at the investor meeting something like "I know a lot of you want us to cover more segments." Imo he is applying meaning to his own doubt about whether they are targeting too broadly.
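The unit-contention point raised a few posts up (mul and branch sharing an issue port) can be illustrated with a toy issue-port model. The port assignments and op mix below are invented for illustration, not Bulldozer's real port map.

```python
def cycles_needed(ops, ports):
    """Greedy per-cycle issue: each port executes at most one
    supported op per cycle; count cycles until all ops retire."""
    pending = list(ops)
    cycles = 0
    while pending:
        cycles += 1
        free = list(ports)          # each port usable once per cycle
        issued = []
        for op in pending:
            for p in free:
                if op in p:         # port p can execute this op class
                    free.remove(p)
                    issued.append(op)
                    break
        for op in issued:
            pending.remove(op)
    return cycles

ops = ["mul", "branch"]
# mul and branch on the same port: independent ops still serialize.
same_port = [{"mul", "branch"}, {"add"}]
# mul and branch on separate ports: they issue in the same cycle.
split = [{"mul"}, {"branch"}]
print(cycles_needed(ops, same_port))  # 2
print(cycles_needed(ops, split))      # 1
```

The point of the sketch: with only two general-purpose ALU ports, op classes that land on the same port contend for it even when the work itself is independent, which is one of the "thousand cuts" the article describes.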
I did look at some benchmarks, but they were all over the place; the bottom line was that it was bad. Just curious how bad, assuming the same core count, same threads, and same GHz speed, which would put it against a Sandy Bridge-E or Haswell-E 8 core with HT purposely disabled, clocked at the same speed.
It seemed so bad that they were even less than half speed, which is why AMD had 8 core chips priced so low compared to Intel's 4 core chips; performance per core was beyond embarrassingly bad. Ryzen is a massive improvement, but still gets beaten badly by Intel at the same clock speed, even in non-gaming tasks. Trolling is not allowed. Markfw, Anandtech Moderator.