A PS5 with 36 CUs and a 2 GHz boost clock will, depending on the game, perform somewhere between a 5700 and a 5700 XT, and in some cases above a STOCK XT... It won't even touch an overclocked XT like a 2 GHz Taichi.
Well, that is if it has 36 CUs. What is being said here is that the PS5 running with 36 CUs was in BC mode, just like the PS4 Pro locks to 18 CUs while running PS4 games.
So in reality it could have more CUs.
This is also based on a GitHub leak, which showed a GPU without ray tracing, which again is impossible considering Sony itself confirmed that the PS5 has hardware-accelerated ray tracing.
Several leaks point to higher performance than the 5700 XT, but that remains to be seen, and I don't think the gap will be by much in any case.
The simple fact of the matter, though, is that it will be an incredible bang-for-buck deal.
You have developers saying there isn't much of a difference between the XSX and PS5, and then you have a CEO saying an RTX 2080 Q is faster than those consoles... Then you have Xbox claiming 2x the GPU power of the X1X for the new console, which puts it at overclocked 5700 XT performance.
We won't know until they are released and tested just how much of a difference there is between the consoles in real-world performance, and what PC hardware they compare to.
I don't really care, I just like speculating while I sit here processing images and sipping on some coffee.
A PS5 with 36 CUs and a 2 GHz boost clock will, depending on the game, perform somewhere between a 5700 and a 5700 XT, and in some cases above a STOCK XT... It won't even touch an overclocked XT like a 2 GHz Taichi.
1. PS4 Pro's GPU has 40 CU instead of PC Polaris 10's 36 CU design
The semi-custom job can fit extra CUs into the existing GCN shader engine framework. That's 10 CUs per GCN shader engine, a layout that has existed since Pitcairn GCN.
Recent leaks have 40 CUs for the PS5.
2. The RDNA shader engine was shown to scale up to 24 CUs with NAVI 14.
Sony's PS5 could target 36 or 40 CUs, and up to 48 CUs, with relative ease and in line with their financials.
The 2 GHz Taichi is 1st-gen 7nm NAVI. It would be strange to see zero electrical-leakage mitigation improvements given the extra R&D time.
For PS5 and XSX, both Sony and MS are following X1X's extra R&D time plan instead of following PS4 Pro/RX-480/RX-5700 XT's release time frame.
-----
My own speculation
PS5 GPU: 44 CUs active out of 48 at around 1700 MHz, with two DCUs disabled. My reason: a dual NAVI 14 framework.
XSX GPU: 56 CUs active out of 60 at around 1700 MHz, again with two DCUs disabled. 56 CUs is awkward for a three-shader-engine RDNA setup because DCU (dual CU) rules are enforced in RDNA's CU design.
Personally, I don't believe in a 2 GHz PS5 GPU, but I'm reposting other people's leaks and speculation as discussion material.
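For anyone who wants the napkin math behind these configurations: peak FP32 throughput is just CUs x 64 shaders x 2 ops per clock x clock speed. A quick sketch in Python, using the rumoured/speculated configs above rather than anything confirmed:

# Peak FP32 TFLOPS = CUs * 64 shaders/CU * 2 ops/clock * clock (MHz) / 1e6
def peak_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1e6

print(peak_tflops(36, 2000))  # ~9.2 TF  - the 36 CU @ 2 GHz rumour
print(peak_tflops(44, 1700))  # ~9.6 TF  - my 44 CU @ 1700 MHz guess
print(peak_tflops(56, 1700))  # ~12.2 TF - my 56 CU @ 1700 MHz guess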
The simple fact of the matter, though, is that it will be an incredible bang-for-buck deal.
You have developers saying there isn't much of a difference between the XSX and PS5, and then you have a CEO saying an RTX 2080 Q is faster than those consoles... Then you have Xbox claiming 2x the GPU power of the X1X for the new console, which puts it at overclocked 5700 XT performance.
We won't know until they are released and tested just how much of a difference there is between the consoles in real-world performance, and what PC hardware they compare to.
I don't really care, I just like speculating while I sit here processing images and sipping on some coffee.
The idea that "Xbox claiming 2x the GPU power of the X1X for the new console puts it at overclocked 5700 XT performance" is flawed, since the X1X GPU is above Hawaii's results, which are in turn above the RX 480/580's results.
Sounds about right. The PS5 and XSX will be upper mid-range by PC standards. They wouldn't be able to match high-end PC standards due to price. Consoles always need to strike the right balance between price and power.
I'd be more interested in knowing the MOS transistor counts. That's usually a better indicator of raw power than the FLOPS.
The idea that "Xbox claiming 2x the GPU power of the X1X for the new console puts it at overclocked 5700 XT performance" is flawed, since the X1X GPU is above Hawaii's results, which are in turn above the RX 480/580's results.
Both the 5700 and 5700 XT have 448 GB/s of memory bandwidth, and the RX 5700 XT's TFLOPS increase didn't come with a memory bandwidth increase.
PC's Polaris 10 GCN has inferior raster IPC results compared to Hawaii GCN.
Microsoft would be basing its claims on something like Forza Motorsport 7's wet tracks, where the X1X beats the RX 580.
I have shown you many times that the Xbox One X loses to the 580 in several games. Showing Forza means nothing; it is a game cooked for consoles and held back on PC, and benchmarks prove it.
The GPU is Polaris with some modifications.
The RX 5700 XT has much improved performance over Vega and Polaris, so yeah, 2x the power could very well be achieved with an OC 5700 XT. MS has not at any point confirmed 12 TF; it all comes from sources, because they themselves have not stated it, and when DF asked them there was no reply.
So people assume it's 12 TF, which is what you do as well when it serves you best, because when it was the PS5 you started casting doubts about HBRT, but when it comes to MS you happily claim 12 TF.
Nvidia GPUs can beat AMD GPUs with less bandwidth, so what makes you think that bandwidth usage on Navi is not better than on Polaris at a base level?
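To put rough numbers on that 2x X1X claim (paper FP32 only, ignoring the RDNA-versus-GCN per-TFLOP differences being argued here), a quick Python sketch:

# Paper FP32 GFLOPS = shaders * 2 ops/clock * clock (GHz); divide by 1000 for TFLOPS
x1x     = 2560 * 2 * 1.172 / 1000   # ~6.0 TF  - X1X GPU (40 CU at 1172 MHz)
target  = 2 * x1x                   # ~12.0 TF - a literal "2x X1X"
xt_game = 2560 * 2 * 1.755 / 1000   # ~9.0 TF  - 5700 XT at game clock
xt_oc   = 2560 * 2 * 2.000 / 1000   # ~10.2 TF - 5700 XT at a ~2 GHz overclock

So a literal 2x in paper TFLOPS would be about 12 TF, which even a ~2 GHz XT doesn't reach on paper; the question is whether RDNA's per-TFLOP gains over GCN make up the rest.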
The idea that "Xbox claiming 2x the GPU power of the X1X for the new console puts it at overclocked 5700 XT performance" is flawed, since the X1X GPU is above Hawaii's results, which are in turn above the RX 480/580's results.
Both the 5700 and 5700 XT have 448 GB/s of memory bandwidth, and the RX 5700 XT's TFLOPS increase didn't come with a memory bandwidth increase.
PC's Polaris 10 GCN has inferior raster IPC results compared to Hawaii GCN.
Microsoft would be basing its claims on something like Forza Motorsport 7's wet tracks, where the X1X beats the RX 580.
I have shown you many times that the Xbox One X loses to the 580 in several games. Showing Forza means nothing; it is a game cooked for consoles and held back on PC, and benchmarks prove it.
The GPU is Polaris with some modifications.
The RX 5700 XT has much improved performance over Vega and Polaris, so yeah, 2x the power could very well be achieved with an OC 5700 XT. MS has not at any point confirmed 12 TF; it all comes from sources, because they themselves have not stated it, and when DF asked them there was no reply.
So people assume it's 12 TF, which is what you do as well when it serves you best, because when it was the PS5 you started casting doubts about HBRT, but when it comes to MS you happily claim 12 TF.
Nvidia GPUs can beat AMD GPUs with less bandwidth, so what makes you think that bandwidth usage on Navi is not better than on Polaris at a base level?
1. Prove it at 4K resolution. MS made its claims for X1X based on 1st party titles.
2. Hawaii GCN raster IPC beats Polaris GCN and Vega GCN raster IPC
Scale the R9 390's 5.1 TFLOPS raster results up to 9.75 TFLOPS and it lands in the RX 5700 XT's range, between the RX Vega 64 and the GTX 1080 Ti (see the sketch below).
Against Vega 64 GCN: adding CUs within Hawaii's quad-shader-engine framework beyond Hawaii's 40/44 CUs gives diminishing returns.
The RX 5700 XT's 40 CU, 8.9 TFLOPS figure instead comes from a clock-speed increase (1755 MHz game clock). NAVI 10 also includes fixes for texture I/O width, back-face culling and shader branching (e.g. for Crytek's software ray tracing).
Against Polaris GCN: the RX 480 is memory-bandwidth gimped.
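A rough sketch of the scaling and bandwidth points above, in Python (linear extrapolation by TFLOPS is an optimistic assumption, and the bandwidth figures are the paper specs):

# Linear TFLOPS scaling of Hawaii-IPC raster results (optimistic assumption)
scale_factor = 9.75 / 5.1        # ~1.9x uplift from an R9 390 TFLOPS budget to RX 5700 XT TFLOPS

# Bandwidth per paper TFLOP: both NAVI 10 cards ship with 448 GB/s GDDR6
bw_per_tf_5700    = 448 / 7.95   # ~56 GB/s per TFLOP (RX 5700, 7.95 TF boost)
bw_per_tf_5700_xt = 448 / 9.75   # ~46 GB/s per TFLOP (RX 5700 XT, 9.75 TF boost) - TFLOPS grew, bandwidth didn't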
VEGA20 (GFX906) (VEGA with Deep Learning extensions)
"CLRX version" is a version of CLRX, an assembler for running compute code on AMD cards. It says "GCN 1.5.0" because Navi uses largely the same instruction encodings as GCN, despite being a new architecture, the same way Bulldozer, Zen and Core all use the x86 instruction set.
Even if it is the equivalent of a 2070 Super, wouldn't it still outperform the 2070 Super because the games are coded differently?
Not really, that's a thing of the past. GPUs on PC and consoles are about the same now performance-wise.
Also, 4K isn't much of a thing on PC, so with PC focusing on 1080p or 1440p (the resolutions PCs actually run at) you need a whole lot less GPU performance than consoles to get comparable results.
Here's an example:
1080p:
2070 Super = 129 fps
old-as-hell Radeon 470 = 62 fps
4K:
2070 Super = 61 fps
Which basically equals the 470 at 1080p.
8K vs 1080p
1080p:
2080 Ti = 189 fps
GTX 950 = 26 fps
4K:
2080 Ti: 26 fps
GTX 950: 28 fps at 1080p (the GTX 950 is basically comparable to the GPU inside the PS4).
Now imagine a GPU weaker than the 2080 Ti, which the 2070 Super is by roughly 30%.
Back to 4k.
Now imagine ray tracing added on top of that console GPU, which will drop performance even harder; PCs will easily steamroll those consoles.
If you play at 1080p and have a 2016 mid-gen GPU with 8 GB of VRAM, you will most likely be fine for the entire generation, unless 4K isn't being targeted on that console; but even if they drop the resolution to 1440p and don't use ray tracing, the GPU needed on PC will be comparable to a Vega 56 or a 1080 / 2060 (probably a GTX 3050).
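Just running the ratios on the figures quoted above (no new benchmarks, only the numbers in this post), a quick Python check:

print(129 / 62)    # ~2.1x - 2070 Super vs Radeon 470 at 1080p
print(61 / 62)     # ~1.0x - 2070 Super at 4K roughly equals the 470 at 1080p
print(129 / 189)   # ~0.68 - per these numbers the 2070 Super is ~30% behind the 2080 Ti at 1080p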
That chart is at 8K where the 2080 Ti scores 26fps.
Yeah, it's a typo, it should be 8K. It's an 8K vs 1080p comparison.