[QUOTE="Wickerman777"]So, here are couple of points about some of the individual parts for people to consider:18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU This is misleading. 50% more CUs does not = 50% more performance because there are other factors to take into account. It has nothing to do with the CPU provided the CPU is quick enough which they should be. Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall. This is BS. It is a 6% clock speed bump, of course each CU is running 6% faster but so is the whole GPU. The way he has written this is suggest that the 6% increase is cumulative for each CU which is bogus and very misleading. We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted. There will be very few cases where you can steam data from both the DDR3 and the ESRAM. The ESRAM will not be in use at all times because the DDR3 feeding it is much slower, it will help with the bandwidth and I am sure it can peak as high as the PS4 but on average sustained throughput the PS4 will come out ahead, I just do not know by how much. We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles. The PS4 CPU has not had its CPU clockspeed revealed as far as I am aware, 1.6Ghz does seem the most likely though. The PS4 also has audio chips, just no on the APU. Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU. This is misdirection. Yes it will help the CPU read GPU generated data, but the whole point of GPGPU is that the GPU does the processing and in this regard the X1 is behind the PS4. It also does not answer the question of weather the CPU can read/write directly to the GPU cache. I do not disagree that the X1 is well balanced, it is just that the PS4 is also well balanced at a higher tier of performance, the X1 is about as good as they could have made it with their initial design goals, available silicon and power budget so it is not a shit box by any means. The issue is that the PS4 had different design goals which meant they did not have to sacrifice APU space to fit in the ESRAM which enabled them to have a more powerful GPU.
18 CU's vs. 12 CU's =/= 50% more performance. Multi-core processors have inherent inefficiency with more CU's, so it's simply incorrect to say 50% more GPU.
Adding to that, each of our CU's is running 6% faster. It's not simply a 6% clock speed increase overall.
We have more memory bandwidth. 176gb/sec is peak on paper for GDDR5. Our peak on paper is 272gb/sec. (68gb/sec DDR3 + 204gb/sec on ESRAM). ESRAM can do read/write cycles simultaneously so I see this number mis-quoted.
We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect.
Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.
Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.
I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around; they understand how to architect and balance a system for graphics performance. Each company has its strengths, and I feel that our strength is overlooked when evaluating both boxes.
Given this continued belief in a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.
Thanks again for letting me participate. Hope this gives people more background on my claims.
http://www.neogaf.com/forum/showpost.php?p=80951633&postcount=195
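The bandwidth numbers quoted above invite the same kind of rough arithmetic. The sketch below uses the commonly cited bus widths and transfer rates (256-bit DDR3-2133 for the Xbox One, 256-bit 5.5 Gbps GDDR5 for the PS4) plus the Microsoft-quoted ESRAM peak; all of these figures are assumptions for illustration.

```python
# Rough peak-bandwidth arithmetic behind the 176 vs. 272 gb/sec claims.
# Assumed figures: 256-bit DDR3-2133 (X1) and 256-bit 5.5 Gbps GDDR5
# (PS4), plus the Microsoft-quoted ESRAM peak.

def peak_gb_per_s(transfers_mts, bus_bits):
    return transfers_mts * (bus_bits / 8) / 1000  # GB/s

gddr5 = peak_gb_per_s(5500, 256)  # PS4: ~176 GB/s
ddr3 = peak_gb_per_s(2133, 256)   # X1:  ~68 GB/s
esram_peak = 204                  # quoted peak, simultaneous read+write
esram_size_mb = 32                # the catch: the fast pool is tiny

print(f"PS4 GDDR5: {gddr5:.0f} GB/s across the whole 8 GB pool")
print(f"X1 DDR3:   {ddr3:.0f} GB/s, plus up to {esram_peak} GB/s "
      f"into only {esram_size_mb} MB of ESRAM")
```

The 272gb/sec figure adds the peaks of two pools that can rarely be saturated at the same time, and the faster pool is only 32 MB, which is the core of the sustained-throughput objection above.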
What especially strikes me as absurd about that is his first claim about more GPU cores. The way he puts it, one could be left with the impression that fewer graphics cores > more graphics cores. Ughh, what?!!! If that's the case, I guess if you're building a gaming PC you'd be better off with a Radeon 7770 than with a Radeon 7970, since a 7770 has just 10 cores to the 7970's 32, lol.
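The PC analogy can be checked with the same arithmetic. This is a small sketch using the published HD 7770 GHz Edition and HD 7970 specs (10 CUs at 1.0 GHz versus 32 CUs at 925 MHz); again, the figures are assumptions for illustration.

```python
# The same peak-FLOPS arithmetic applied to the 7770 vs. 7970 analogy.
# Assumed specs: HD 7770 GHz Edition (10 CUs, 1.0 GHz) and HD 7970
# (32 CUs, 925 MHz), both GCN with 64 ALUs per CU.

def peak_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu_cycle=2):
    return cus * alus_per_cu * flops_per_alu_cycle * clock_ghz / 1000

hd7770 = peak_tflops(10, 1.000)  # ~1.28 TFLOPS
hd7970 = peak_tflops(32, 0.925)  # ~3.79 TFLOPS

print(f"HD 7770: {hd7770:.2f} TFLOPS, HD 7970: {hd7970:.2f} TFLOPS")
# Nearly 3x the peak throughput: more CUs of the same architecture
# scale well for graphics workloads.
```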
btk2k2
You should add that the genius Cerny added extra ACEs (eight, versus the two in stock GCN) and modified the architecture to support fine-grained compute, so the GPU can run GPGPU jobs without the games taking a graphics hit.
The Xbox One, on the other hand, uses an off-the-shelf GPU, so developers wanting to use GPGPU compute will have to sacrifice graphical fidelity.
Cerny is a genius. Who does Microsoft have that's on the same level?