Well shit.
Nvm Yayas, I take it all back.
That is false and was clearly stated not to be true in Mark Cerny's own words.
Excuse me, it will run at 2.23GHz most of the time.
However, again, while 2.23GHz is the limit and also the typical speed
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision
The only times the GPU will not run at that speed is when the CPU requires more power; in other words, in CPU-bound scenarios where the CPU needs extra power. Otherwise that will be the speed.
The PS5 is a 9.2 TF console boosted to 10.3 TF.
The 7nm process can only sustain speeds around 1.8GHz, and the 7nmE process can sustain speeds up to 2GHz, not above. MS already told us XSX is 7nm Enhanced.
Unless the PS5 is built on the expensive and fairly new 7nm+ process, those 2.23GHz claims are not realistic.
I highly doubt the PS5 was designed around a different process than XSX. More likely both are built on the 7nmE process, with 2GHz sustained clocks being the safe clock speed for those GPUs.
GitHub was correct with their information. 9.2 TF is likely the sustained output of PS5's GPU, with it able to boost to 10.3 TF when power allows.
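For reference, the teraflop figures being argued over here all come from the same simple formula: shader count times two FP32 operations per clock (a fused multiply-add) times clock speed. A quick sketch, assuming RDNA's 64 shaders per CU:

```python
# TFLOPS = 2 FP32 ops (FMA = multiply + add) per shader per clock * shaders * clock.
# RDNA-family GPUs have 64 shaders per CU.
def tflops(cus: int, ghz: float) -> float:
    return 2 * cus * 64 * ghz / 1000.0

print(tflops(36, 2.00))   # 36 CUs at 2.0 GHz   -> 9.216 (the "9.2 TF" figure)
print(tflops(36, 2.23))   # 36 CUs at 2.23 GHz  -> ~10.28 ("10.3 TF")
print(tflops(52, 1.825))  # 52 CUs at 1.825 GHz -> ~12.15 (XSX's "12 TF")
```

So the "9.2 TF" and "10.3 TF" claims are the same 36 CU part at two different clocks; the whole disagreement is over which clock is sustained.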
NO.
RDNA2. Please stop.
However, again, while 2.23GHz is the limit and also the typical speed
https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision
Again, the speed will be 2.23GHz; only when the PS5's CPU requires more power will the GPU drop, in the worst case by 10% power, which means an even smaller percentage frequency drop.
GitHub never stated that. GitHub showed what was an early test of the chip, which, by the way, was already running at 2.0GHz more than a year ago.
Second, you can't just take a GPU that is already running at 2GHz and raise it 200+MHz without problems. Hell, people here claimed the PS5 wasn't RDNA2, when it clearly is, since RDNA1 would never, ever sustain those clocks, period; RDNA1 even had thermal problems with small overclocks over stock.
I ask you: what do you fanboys get from this? Why invent crap to try to make the Xbox gap bigger than it is? You people did the same thing this gen, but in reverse, trying to downplay a 40% gap; now you want to make a 17% one seem bigger. And you are not the only one, oddly using the same style, claiming false crap that has already been disproven by Sony's own words and, worse, that even DF addresses in its article.
The gap will be what it is, period; it will not grow or shrink through arguments.
Hell, there is a bigger disparity in the SSD than in power.
@EG101: No, it's not. But keep saying that to further your little narrative.
Either he has 10 accounts, or lemmings have joined forces to make the same false claims to see if they stick. 😂
Then they claim I am damage controlling, but somehow what they are doing is not inventing crap.
@EG101: the 5700 XT has been overclocked to 2.2GHz without issues. There will be no problem running the PS5 GPU at those speeds consistently if needed, or else Cerny wouldn't have told devs that. But keeping power consumption down when games don't utilize all the power is important, especially these days when the environment matters so much. There is no point for the X1X to use 300 watts running Netflix.
The small differences between these consoles aren't much to argue over. The X1 ran most games at 720p to 900p at 25-30fps, and no Xbox fans complained then. Both of these next-gen consoles are impressive, but Sony looks to have the best combination of power, innovation, and cost. People who really miss some pixels will have a gaming PC anyway.
Thanks, that's good to hear. PS5 should be able to sustain 2.23GHz without any issues.
When there are no current GPUs selling at that clock speed, it's difficult to believe the PR speak that 2.23GHz would be sustainable.
No idea what a yaya is but the man goes above and beyond the call of duty.
What a silly thing to say.
Anyway, PS5 will be the first mass-market device with a 36 CU AMD GPU clocked so high. Hope it works out well.
Name one other stock AMD GPU running above 2GHz right now on a 7nmE process.
If you can, I'll admit I'm wrong.
Name one RDNA2 GPU bigger than the Series X. Oh, and both have ray tracing; again, no AMD variant with these specs exists on PC.
Just because something is new and not on PC doesn't mean it's not real; both the PS5 and Xbox have GPUs with no equivalent from AMD on PC yet.
Basically what you are doing is damage controlling the fact that the PS5 is not far behind; the Xbox being stronger doesn't satisfy you unless it is by a lot. 😂
@Martin_G_N:
"The differences between the X1 and PS4 made me think we'd get almost different games, but basically it's just resolution on the X1 that's lower."
- In the end, that's all we're really going to see between the ps5 and XboxSX. I dunno what all the hubbub is about. And in the end, ps5 will still have the better exclusives.
We'll find out the truth when the DF comparisons make the rounds.
I still don't trust that high clock as sustainable. Sounds like it will get very hot to me.
Lisa Su already stated that the RX 5700/RX 5700 XT will be refreshed.
Read https://www.tweaktown.com/news/70277/amd-promises-navi-refresh-next-gen-rdna2-graphics-cards-2020/index.html
Lisa Su said: "In 2019, we launched our new architecture in GPUs, it's the RDNA architecture, and that was the Navi based products. You should expect that those will be refreshed in 2020 - and we'll have a next generation RDNA architecture that will be part of our 2020 lineup".
She continued: "So we're pretty excited about that, and we'll talk more about that at our financial analyst day. On the data centre GPU side, you should also expect that we'll have some new products in the second half of this year"
RDNA 2 with DXR 1.1 was already demoed by AMD.
AMD's wasteful raytraced-reflections-everywhere demo.
Mobile 36 CU/40 CU RDNA 2 GPUs will be important for the laptop PC market.
Probably, but I am sure it will not tell you the speed of the GPU.
Well, according to what is being said, Sony has a hell of a good cooling system in place.
PS5 should still be a 4K/60 console. I'm not worried about the next-gen consoles in terms of visuals and quality. The difference will be in performance. Will that extra bit of GPU power really make a noticeable difference? And what cutbacks will multiplatform devs really have to make because the PS5 has a slightly less capable GPU?
Imo, the differences between the two will likely just be in the ray tracing implementation.
XSX will be able to be a bit more aggressive there. Games should run the same.
That's one thing I missed. What's the difference between the Series X and the PS5 in terms of ray tracing capabilities?
The rumor is that in RDNA2's architecture, the more CUs you have, the better ray tracing can be implemented. Clock speed doesn't necessarily scale linearly with ray tracing, but CU count does.
Not sure how accurate that is, but that's the rumor from the same guy who released the GitHub specs for both consoles.
Edit: Also, ray tracing is a resource hog, so the extra TF will help with running your game while implementing RT.
That sounds like something. ;)
@EG101:
The GitHub leak refers to both consoles being Zen 1.5 and RDNA1 (older dev kits? or just plain guessing). I wouldn't place too much faith in the predictions, with current info putting both at a baseline of Zen 2, RDNA2, GDDR6.
Claim: The memory bandwidth gap between XSX and PS5 increases with higher CPU usage
XSX CPU has ~937 GFLOPS at 3.66 GHz with SMT or ~973 GFLOPS at 3.8 GHz with no SMT.
PS5 CPU has ~896 GFLOPS at 3.5 GHz (variable) with SMT.
GFLOPS in FP32.
Notes: Intel AVX2 has GPU-like gather instructions. Each Zen 2 core has dual 256-bit AVX2 FMA3 instruction capability (IBM PowerPC can get lost). AVX2 can handle FP16/INT16 packing into its 256-bit registers, e.g. https://stackoverflow.com/questions/39413328/load-16-bit-integers-in-avx2-vector
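Those CPU GFLOPS figures follow directly from Zen 2's two 256-bit FMA units per core: 2 units × 8 FP32 lanes × 2 ops per FMA = 32 FLOPs per cycle per core. A sketch, assuming all 8 cores at peak:

```python
# Peak FP32 throughput for an 8-core Zen 2 cluster:
# 2 FMA pipes/core * 8 FP32 lanes * 2 ops (multiply + add) = 32 FLOPs/cycle/core.
def zen2_gflops(cores: int, ghz: float) -> float:
    return cores * 32 * ghz

print(zen2_gflops(8, 3.66))  # XSX with SMT -> ~937
print(zen2_gflops(8, 3.8))   # XSX no-SMT   -> ~973
print(zen2_gflops(8, 3.5))   # PS5          -> 896
```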
Scenario 1: Near GPU-only memory access, with maximum CPU-to-GPU fusion link usage and programming tricks that maximally respect CPU cache boundaries, i.e. an x86 CPU can be programmed like a CELL SPE. This has been in Intel's CPU optimization guide since x86 CPUs gained L2 cache.
For example, SwiftShader 3.0's Direct3D 9c software JIT renderer requires the proper CPU cache size config setting or software 3D rendering performance will suffer, whereas CELL SPEs would trigger an exception error. An x86 CPU is more forgiving to the programmer, i.e. the programmer can decide whether the performance penalty is acceptable.
Manual tiled cache compute programming is not new for high-performance x86 server apps. L3 cache is usually large on x86 server and workstation CPUs.
XSX GPU has 25% memory bandwidth advantage over PS5 GPU.
Scenario 2: Zen 2 CPU consumes 40 GB/s memory bandwidth which exceeds PS4 CPU's 20 GB/s memory access IO
Equivalent to PC CPU with 128 bit DDR3-2600
XSX GPU: 520 GB/s
XSX CPU: 40 GB/s
VS
PS5 GPU: 408 GB/s
PS5 CPU: 40 GB/s
XSX GPU has 27.45% memory bandwidth advantage over PS5 GPU.
Scenario 3: Zen 2 CPU consumes 60 GB/s memory bandwidth which exceeds PS4 CPU's 20 GB/s memory access IO
Equivalent to PC CPU with 128 bit DDR4-3800
XSX GPU: 500 GB/s
XSX CPU: 60 GB/s
VS
PS5 GPU: 388 GB/s
PS5 CPU: 60 GB/s
XSX GPU has 28.86% memory bandwidth advantage over PS5 GPU.
Scenario 4: Zen 2 CPU consumes 80 GB/s memory bandwidth which exceeds PS4 CPU's 20 GB/s memory access IO
Equivalent to PC CPU with 128 bit DDR4-5000 e.g. Corsair’s Vengeance LPX DDR4 5,000 MHz kit (pair of 8GB modules)
XSX GPU: 480 GB/s
XSX CPU: 80 GB/s
VS
PS5 GPU: 368 GB/s
PS5 CPU: 80 GB/s
XSX GPU has 30.4% memory bandwidth advantage over PS5 GPU.
Only the XSX can brute force like a gaming PC with similar CPU and GPU specs.
CPU game-world simulation can operate independently from loading textures, aka Tiled Resources. LOL
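The scenario numbers above are just a subtraction model: take each console's total bandwidth (560 GB/s for the XSX's 10 GB GPU-optimal pool, 448 GB/s for the PS5), deduct the CPU's assumed share, and compare what is left for the GPUs. A sketch (note that real GDDR6 contention typically costs more than a straight subtraction, so treat these as upper bounds):

```python
# GB/s totals: XSX figure is its 10 GB GPU-optimal pool; PS5 is its unified pool.
XSX_TOTAL, PS5_TOTAL = 560, 448

for cpu_gbps in (40, 60, 80):
    xsx_gpu = XSX_TOTAL - cpu_gbps   # bandwidth left for the XSX GPU
    ps5_gpu = PS5_TOTAL - cpu_gbps   # bandwidth left for the PS5 GPU
    adv = (xsx_gpu / ps5_gpu - 1) * 100
    print(f"CPU takes {cpu_gbps} GB/s -> XSX GPU {xsx_gpu}, PS5 GPU {ps5_gpu}, "
          f"XSX ahead by {adv:.2f}%")
```

This reproduces the ~27%, ~29%, and ~30% figures in scenarios 2-4: because both CPUs deduct the same amount from different totals, the relative gap widens as CPU usage grows.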
Well, it's the same with Turing, i.e. more SMs = more RT cores, more cache.
An overclocked TU106... it wouldn't beat TU104 or TU102.
AMD should have learned from Vega's BS TFLOPS scaling by RDNA 2's release, because NVIDIA is rumored to increase the GPC count (raster engine and geometry input) from six to seven and eight with RTX Ampere.
I'm looking forward to an RTX 3080 Ti with seven GPCs, +40 percent higher raster throughput (16 percent from the seventh GPC + 33 percent clock speed increase, e.g. 2.39 GHz) and double the DXR performance.
All I know is whichever next-gen console runs Cyberpunk 2077 the best will be getting my money day one.
@gifford38: Incorrect. Microsoft will be handling all of Sony's online functions, such as PSN and multiplayer.
xCloud is 100% exclusive to Xbox and allows Xbox users to stream all their content on their mobile devices.
@FinalFighters: Xbox Series X will run all games better than any other console.
Well, it depends on what you consider better: faster frames, resolution, or effects, sure; faster loading, less pop-in, and a faster system, probably not.
There are some things missing here.
1. Where is the 3.5GB of memory slower than the PS5's illustrated there?
2. Did you notice the trend in those memory calculations you made?
In all of them the Xbox shows a gap much wider than the actual power advantage it has over the PS5, which is basically 17%.
So basically it is overkill; having more bandwidth than power will not increase your power. The only way power increases with more bandwidth is if you are bandwidth bound, which doesn't seem to be the case, so that 17% power gap will not suddenly transform into a 30% gap just because the Xbox has more bandwidth.
If a game uses 13.5GB of video memory, the Xbox will have mixed results because its bandwidth is not equally fast all around.
@kazhirai: The problem is you pulled the worst-case scenario out of your ass. Multiple sources have said the GPU will never drop below 10 TF. Multiple devs have said the gap will be smaller than the numbers on paper, and you have Xbox fanboys saying it will somehow be larger.
MS is not handling it per se; Sony pays to use MS's cloud, which is not the same. Sony will be like any other customer.
@BlackShirt20: there was an article saying Sony is working with Microsoft to get xCloud. Sony will have its own Game Pass.
Sony already has its own Game Pass: PS Now. Game Pass is a copy of PS Now, and Games with Gold is a copy of PS Plus.
@bluestars: Hellblade looks the same on Pro and X. Get it out of your head that a few extra pixels make a game look better lol. Lmao at Hellblade looking better than GOW. Xbox has no identity, all you guys care about is better multiplats because you KNOW the exclusives suck.
Who are these "sources" and what full next gen games have they tested this on?
1. The game's 3.5 GB at 336 GB/s is treated like non-GPU memory, e.g. for CPU and audio.
Based on PS4's Killzone Shadow Fall, the GPU dominates memory usage.
This is why I cited the PC's 128-bit DDR4-3800 with 60 GB/s as an example.
From the PS4 example, it's effectively asking for a gaming PC setup. The tiny CPU+GPU data share is like the equivalent of a PC's CPU-to-GPU PCI-E link.
CPU does its assigned workload.
GPU does its assigned workload.
Small CPU-GPU shared workload.
A raw Gears 5 benchmark port done in two weeks, at PC ultra settings, is already showing the XSX rivaling an RTX 2080 class GPU, hence the XSX GPU's 12 TFLOPS is scaling, backed by the memory bandwidth increase. The XSX is acting as a gaming PC with similar CPU and GPU specs.
Also, the XSX's memory layout allows an easy last-minute change in the future, e.g. 20 GB GDDR6 with ten 2GB chips.
2. TFLOPS calculations
XSX GPU, 2 * (52 x 64) * 1.825 GHz = 12,147 GFLOPS or 12.147 TFLOPS at base clock speed
PS5 GPU, 2 * (36 x 64) * 2.230 GHz = 10,275 GFLOPS or 10.275 TFLOPS at variable clock speed
PS5 GPU -2%, 2 * (36 x 64) * 2.1854 GHz = 10,070 GFLOPS or 10.070 TFLOPS at variable clock speed
12,147 / 10,275 = 1.182, or the XSX GPU has 18.2% on top of the PS5's GPU
12,147 / 10,070 = 1.206, or the XSX GPU has 20.6% on top of the PS5's GPU
3. Each hardware advantage impacts render time, and the advantages can be cumulative, but each has dependencies, e.g. high memory bandwidth without a good TFLOPS painter is nearly pointless, e.g. the XBO, which is CU bound.
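The ratio arithmetic in point 2 can be reproduced directly (FP32, with an FMA counted as 2 ops per shader per clock):

```python
# 64 shaders per CU, 2 FP32 ops per shader per clock.
def gflops(cus: int, ghz: float) -> float:
    return 2 * (cus * 64) * ghz

xsx = gflops(52, 1.825)               # 12147.2
ps5 = gflops(36, 2.230)               # ~10275.8
ps5_down2 = gflops(36, 2.230 * 0.98)  # ~10070.3 with a 2% clock drop

print(xsx / ps5)        # ~1.182 -> XSX ~18% ahead at the PS5's peak clock
print(xsx / ps5_down2)  # ~1.206 -> ~20.6% ahead if the PS5 sheds 2% of clock
```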
@ronvalencia said: 1. Game's 3.5 GB with 336 GB/s is treated like a non-GPU memory e.g. CPU and audio.
Wait, the Xbox Series X is not the PS4; you can't base how the Xbox will use that memory on Killzone. MS and Sony use totally different approaches.
But if that memory will be treated as non-GPU memory, that means the Xbox Series X will be capped at 10GB of RAM, which means it can have less memory available to games than the PS5.
But from what I am seeing and reading in some places, most tech sites compare it to the Xbox One memory setup, with two pools of memory at two different speeds.
And some even claim that in that 3.5GB region the Xbox Series X will be slower.
@BlackShirt20 said: @tormentos: Gamepass is PSN done right. PSN is a complete joke.
No, it is a copy; that is what it is.
Yeah, PSN is such a joke that it is basically printing money for Sony. No one uses it, right, because it sucks so much?
It's not 2002, man, it's 2020. PSN and Live are basically the same where it counts, and most games are still P2P.
@i_p_daily said: Who are these "sources" and what full next gen games have they tested this on?
Sony said the drop was small, so unless you have a better source than Cerny himself, you are damage controlling again.
@i_p_daily: Well first we have Cerny saying it would rarely drop below 10.3.
Then devs saying the gap is smaller then it appears.
http://www.pushsquare.com/news/2020/03/ps5_superior_to_xbox_series_x_in_a_lot_of_ways_but_devs_seem_disappointed_by_sonys_communication
Also, if you listened to the conference, it only takes a couple percent drop in clock to save 10% power. So dropping down to 9.2 TF would require almost a 10% drop in clock speed, which by that ratio would be an enormous power reduction. Not happening.
People are still taking the 9.2 TF figure from the GitHub leak, which was RDNA1-era test silicon, to guess that number. It was a test sample, not final silicon.
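The arithmetic behind that claim is easy to check. A sketch, assuming the commonly cited 36-CU / 2304-shader PS5 GPU and a 2.23 GHz cap (the function name is mine):

```python
def clock_for_tflops(target_tf, cus=36, cores_per_cu=64):
    """Clock (GHz) needed to hit a target peak FP32 TFLOPS figure."""
    return target_tf * 1000 / (2 * cus * cores_per_cu)

needed = clock_for_tflops(9.2)   # ~1.997 GHz
drop = 1 - needed / 2.23         # ~10.5% below the 2.23 GHz cap
print(f"9.2 TF needs ~{needed:.3f} GHz, a {drop:.1%} clock drop")
```

So sustaining only 9.2 TF really would mean running roughly 10% under the advertised clock, which is the scenario the post argues against.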
@tormentos: "Well, it depends on what you consider as better: faster frames, resolution or effects, sure; faster loading, less pop-in and a faster system, probably not."
The experience I have actually playing the game is more important than the time spent waiting for it to load. Especially since that 2-3 second advantage will only be on first-party Sony games, which mostly don't interest me.
Developers are saying the PS5 is the better machine.
https://wccftech.com/schreier-multiple-devs-are-telling-me-ps5-is-superior-to-xbox-series-x-in-several-ways-despite-spec/amp/
Jason Schreier, whose sources are usually sound, clearly said he heard from several different developers that while both consoles are undoubtedly impressive, the Xbox Series X isn't actually that much more powerful; in fact, some even said the PS5 is superior in several ways. As such, Schreier commented during the podcast that Sony dropped the ball hard in terms of communication.
The same dude who claimed that his sources told him the PS5 would have a faster GPU than the 2080? This Jason?
Lol.
These insiders are a joke and are all taking you for a ride. They just care about boosting their follower counts on Twitter.
@bluestars: Hellblade looks the same on Pro and X. Get it out of your head that a few extra pixels make a game look better, lol. Lmao at Hellblade looking better than GOW. Xbox has no identity; all you guys care about is better multiplats because you KNOW the exclusives suck.
Death Stranding is the best-looking console game; nothing comes close. Don't listen to me, ask Digital Foundry.
@tormentos: "Well, it depends on what you consider as better: faster frames, resolution or effects, sure; faster loading, less pop-in and a faster system, probably not."
The experience I have actually playing the game is more important than the time spent waiting for it to load. Especially since that 2-3 second advantage will only be on first-party Sony games, which mostly don't interest me.
Yeah, I guess you didn't touch the Xbox One from 2013 to 2017.
That's like saying the only graphical advantage the Xbox will have is on exclusives. What makes you think that developers, just like they pushed the PS4 and Xbox One X, will not push the Xbox Series X and PS5 as well?
Pathetic.
Jason Schreier, whose sources are usually sound, clearly said he heard from several different developers that while both consoles are undoubtedly impressive, the Xbox Series X isn't actually that much more powerful; in fact, some even said the PS5 is superior in several ways. As such, Schreier commented during the podcast that Sony dropped the ball hard in terms of communication.
The same dude who claimed that his sources told him the PS5 would have a faster GPU than the 2080? This Jason?
Lol.
These insiders are a joke and are all taking you for a ride. They just care about boosting their follower counts on Twitter.
Well, if what he is saying is true, that would mean it is faster than a 2080, which the Series X should be.
But I don't see that happening. People need to come to terms with it: the Xbox will be more powerful, period. Not by much, but it will be more powerful. Cows can't fall as low as lemmings did at the start of this gen.
fu** the secret sauce.
So it's confirmed. The PS5 is the most powerful console for next gen.
No, and people should not say that or quote people implying that. If that happened I would eat crow, but the Xbox will be stronger. Not by much, but it will be stronger, period.
@Juub1990: Jason never said that. Another leak said that, and yes, the performance will be about on par with a 2080, so not really that far off.
That thing will be on the level of a 5700 XT but a bit faster. Still far from a 2080, let alone stronger. He was dead wrong.
Yes he did, in a podcast.
Source
Summary of transcript