Radeon VII reveal buries the hopes for truly high-end next-gen consoles.

#251  Edited By ronvalencia
Member since 2008 • 29612 Posts

@gamecubepad said:

@ronvalencia:

I kinda learned my lesson about buying into the AMD hype train. Polaris 10 and FinFET 14nm were supposed to be this big jump, yet my old R9 390 PCS+ still puts the beatdown on my RX 480 8GB, with only 5.5TF power vs 6.2TF on the RX 480. The only "win" was dropping from ~285W on the 390 to ~164W on the RX 480.

This speculation is fun, but I'm saying based on history, PS5 will probably get a GPU around 120W and from AMD's $200-300 retail pricing tier. The Navi 10 "RX 3080" rumor above sounds about right with a 20-25% downclock, so like a 1070 OC/1070ti.

I didn't buy the RX 480 due to its rasterization hardware issue.

Before Raja "Mr TFLOPS" Koduri joined AMD sometime in 2013, AMD was keeping up in rasterization power against NVIDIA's Kepler GPUs, and I didn't like Kepler's register-storage-to-CUDA-core ratio.

Hawaii GCN was later gimped since AMD didn't apply the Polaris updates to it, e.g. delta color compression, the L2 cache increase from 1MB to 2MB, better triangle culling, a better graphics command processor, and higher clock speeds. That an R9 390X OC at 1100 MHz with 6.2 TFLOPS can almost rival the reference R9 Fury Pro/X with their higher TFLOPS is revealing.

One would assume the R9 Fury X, with 512 GB/s of memory bandwidth versus the R9 390X's 384 GB/s, would get a six-raster-engine, 96-ROPS configuration, but the Raja "Mr TFLOPS" Koduri administration just slapped on 20 extra CUs with the same raster power as the R9 390X, hence raster-to-TFLOPS efficiency was reduced on Fury.
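
A quick back-of-the-envelope comparison (my own arithmetic, using nominal specs rather than anything from this thread: 2816 SPs and 64 ROPS for the 390X, 4096 SPs and 64 ROPS for Fury X, clocks as cited above) shows the gap, since peak fillrate scales with ROPS × clock while TFLOPS scale with shader count × clock:

```cpp
// Rough illustration only: TFLOPS = 2 FLOPs (FMA) x stream processors x clock,
// peak fillrate = ROPs x clock. Nominal/OC clocks assumed as in the post above.
#include <cstdio>

int main() {
    struct Gpu { const char* name; int sps; int rops; double ghz; };
    const Gpu gpus[] = {
        {"R9 390X OC", 2816, 64, 1.10},
        {"R9 Fury X",  4096, 64, 1.05},
    };
    for (const Gpu& g : gpus) {
        double tflops = 2.0 * g.sps * g.ghz / 1000.0;  // shader throughput
        double gpix   = g.rops * g.ghz;                // peak Gpixels/s
        std::printf("%-11s %.1f TFLOPS, %.1f Gpix/s\n", g.name, tflops, gpix);
    }
    // Fury X carries ~39% more TFLOPS yet no extra peak fillrate (slightly
    // less at these clocks), which is the raster-to-TFLOPS gap described above.
    return 0;
}
```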

Until AMD fixes its rasterization hardware issue, I will continue to buy NVIDIA's desktop GPUs. AMD should realize that PC gamers are buying a "GPU", not a DSP with weak raster hardware.

PS: I purchased a 32-inch FreeSync/HDR/4K LG monitor for my GTX 1080 Ti and I'm happy with the FreeSync results.

The VII's 60 CUs at a high 1800 MHz clock speed are just a short-term fix until AMD overcomes its quad raster engine and 64 ROPS design limitations.

Would Sony or MS accept another quad-raster-engine GPU in the 7nm era?

#252  Edited By ronvalencia
Member since 2008 • 29612 Posts

@loco145 said:

@ronvalencia: First, yes. As your quote states, those operations in Turing are independent from the CUDA cores, which are now free! What GCN has is that a single compute unit can do both floating-point and integer, while with Pascal and Maxwell one type of operation would block the other on the whole SM. This is no longer an issue with Turing, and furthermore, the full CUDA cores are freed from doing the tensor operations. What you are implying, that GCN can do integer operations for free (at the same time as a float operation on the same shader), is simply not true, but it is the case with Turing since it has significant silicon budget dedicated to it!

Your memory calculations fail to take into account that the bus is shared between the GPU and CPU in a SoC. Memory bandwidth won't be the strong point of the system; just like the PS4 had problems with AF, you can't hope to bail out the AMD architecture the way grossly overclocking the memory on a discrete card does (yield issues, anyone?). Also, you ignore that the GPU (at 7nm!) is already $700, and you are proposing to increase the number of rasters on the chip? So you have an architecture that is memory-bandwidth starved (the Vega 56 video you like to post so much) and you want to fix it by adding even more compute units that will require even more memory?

1. A GCN CU's wavefront MIMD container can mix floating-point and integer data formats without the floating-point instructions being blocked.

2. I've seen this argument before. A PC CPU with PCI-e 3.0 has two 16 GB/s directional links into the GPU's memory bus, hence some portion of the GPU's VRAM bandwidth is shared with PCI-e I/O traffic from the CPU.

The PS4 CPU optimization guide tells programmers to stay within the CPU's L2 cache boundary, and such optimization guides have existed since the K7 era. Cell SPU local-memory optimization is nothing new; the difference is that there's no hardware error when x86 code spills over the L2 cache.

An APU has direct links between the CPU and GPU, hence data generated in the L2 cache can minimize main-memory transfers between the CPU and GPU. The programmer has to tile compute workloads within the L2 cache boundary, as in the sketch below.
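
A minimal sketch of that kind of tiling (my own illustration, not console SDK code, assuming a 2 MB shared L2): each tile is processed by both passes while it is still cache-resident, instead of streaming the whole buffer through main memory twice.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kL2Bytes   = 2u * 1024u * 1024u;              // assumed shared L2 size
constexpr std::size_t kTileElems = kL2Bytes / (2 * sizeof(float));  // leave room for in + out

void process(std::vector<float>& data) {
    for (std::size_t base = 0; base < data.size(); base += kTileElems) {
        const std::size_t end = std::min(base + kTileElems, data.size());
        // Pass 1: produce intermediate values for this tile (stays hot in L2).
        for (std::size_t i = base; i < end; ++i) data[i] = data[i] * 0.5f + 1.0f;
        // Pass 2: consume the same tile while it is still in cache.
        for (std::size_t i = base; i < end; ++i) data[i] = data[i] * data[i];
    }
}

int main() {
    std::vector<float> data(8u * 1024u * 1024u, 1.0f);  // 32 MB, larger than any L2
    process(data);
}
```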

APUs weren't optimized for larger CPU data sets that exceed the L2 cache boundary, which is not a major problem for game consoles. Don't expect PC-style large-scale RTS games on consoles.

#253  Edited By loco145
Member since 2006 • 12226 Posts

Operations aren't blocked, but they still use compute resources. That's the whole point... And yes, that's the L2 cache you want to gimp compared to the Radeon VII's to make it cheaper and smaller. The argument is true, BTW, and the overhead of the PCI-E connection is much smaller than the bandwidth sharing on a SoC. Just look at the (relatively) small penalty of using an eGPU, and the fact that there's little reason not to force 16xAF at the driver level on PC, while the PS4 has to optimize its usage.
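
To put rough numbers on that comparison (my own nominal figures, not from this thread; the CPU-traffic figure is purely an assumption):

```cpp
// On a discrete-GPU PC the CPU reaches VRAM over PCIe, while on a console SoC
// the CPU eats into the same unified bus the GPU uses. Assumed here:
// PCIe 3.0 x16 ~= 15.8 GB/s per direction; PS4-class unified GDDR5 = 176 GB/s
// (5.5 Gbps x 256-bit); ~20 GB/s of CPU traffic on the shared bus.
#include <cstdio>

int main() {
    constexpr double pcie3_x16   = 15.8;   // GB/s per direction
    constexpr double unified_bus = 176.0;  // GB/s shared GDDR5
    constexpr double cpu_traffic = 20.0;   // GB/s assumed CPU demand

    std::printf("Discrete PC: GPU keeps its full VRAM bandwidth; CPU<->GPU capped at %.1f GB/s\n",
                pcie3_x16);
    std::printf("Console SoC: GPU effectively sees %.0f of %.0f GB/s once the CPU takes its share\n",
                unified_bus - cpu_traffic, unified_bus);
}
```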

#254 ronvalencia
Member since 2008 • 29612 Posts

@loco145 said:

Operations aren't blocked, but they still use compute resources. That's the whole point... And yes, that's the L2 cache you want to gimp compared to the Radeon VII's to make it cheaper and smaller.

https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

Turing introduces a new processor architecture, the Turing SM, that delivers a dramatic boost in shading efficiency, achieving 50% improvement in delivered performance per CUDA Core compared to the Pascal generation. These improvements are enabled by two key architectural changes. First, the Turing SM adds a new independent integer datapath that can execute instructions concurrently with the floating-point math datapath. In previous generations, executing these instructions would have blocked floating-point instructions from issuing.

Read "In previous generations, executing these instructions would have blocked floating-point instructions from issuing"

------

Vega 64's 4MB L2 cache for the TMU and ROPS read/write units is fine, since the X1X's GPU already has a 2MB L2 cache for the TMUs and a 2MB render cache for the ROPS.

A major improvement path for Turing GPUs is doubled L2 cache storage, e.g.

GTX 1080 Ti's 3MB L2 cache to RTX 2080 Ti's 6 MB L2 cache

GTX 1080's 2MB L2 cache to RTX 2080's 4 MB L2 cache

GTX 1070's 2MB L2 cache to RTX 2070's 4 MB L2 cache

GTX 1060's 1.5MB L2 cache to RTX 2060's 3 MB L2 cache

Vega 64's high bandwidth cache (it acts like an L3 cache, with higher latency) below the L2 cache should be removed, since it's useless for games.

#255  Edited By loco145
Member since 2006 • 12226 Posts

On GCN, operations aren't blocked, but they still use resources. That's the whole point: the extra silicon in Turing is not dead weight, it gives a performance advantage over whatever the next-gen consoles will have.

Read: at 300W and 7nm, AMD is competing with a 12nm/215W part... without taking into account the extra silicon in the latter. Your argument that GCN doesn't need that is not correct, because using a unit to do tensor operations on the Radeon VII would still use compute resources. That's my whole point.

#256 PC_Rocks
Member since 2018 • 8603 Posts

@fedor: @Diddies:

Why are you guys replying to uninformed troll cows like Blackhaired? Haven't you realized by now that he doesn't know jacksh*t and is too far up Sony's a$$ to have a sensible conversation? He literally makes up his own facts as he goes along.

#257 PC_Rocks
Member since 2018 • 8603 Posts

@gamecubepad said:

4k gaming on the PS4 Pro has 150W total system power consumption. Same game is 150W on PS4. Both systems were $399 competing against $500 systems from MS.

So not only does the total system need to consume only around 150W, it also needs to fit within a ~$380 BoM. Next gen systems will have something like 8-core 3rd-gen Ryzen CPU, 16GB GDDR6, 2GB sideport DDR4, 2TB HDD, and UHD Blu-Ray. Whatever is leftover from that power consumption and cost will determine what GPU they can include.

These are rumors of course, same rumors that said no 7nm Vega for gamers, so that was wrong, but they give a decent idea of where a PS5 GPU could slot. P.S.-Given the 7nm Radeon VII, these power consumption and price points seem very hopeful.

RX 3080: Navi 10, 8GB GDDR6, 150W, RTX 2070 / GTX 1080 class, $249
RX 3070: Navi 12, 8GB GDDR6, 120W, RTX 2060 / GTX 1070 class, $199

Polaris 10 was also "supposed" to be a 150W part, and they had to downclock it 25% to get it into the PS4 Pro, and on the X1X, they had to use Hovis method and vapor chamber cooler with 384-bit memory bus to get stock performance. I don't think $399 gets you Hovis and vapor chamber cooler. Also, RTX 2060 is a $350 card, so they won't be hitting that performance with a $199 card now that we've seen their 7nm Vega pricing.

One thing to keep in mind here is that the Radeon VII and other AMD cards have been using HBM2 to decrease their power draw. If they go with GDDR6 on consoles, the TDP would be higher, and if they go with HBM2, the price would be higher.


#258 mrbojangles25
Member since 2005 • 60619 Posts

My guess for next gen is console business as usual: putting the cart before the horse. They tried to do 4K before they could do 1080p correctly, and tried to do 1080p correctly before... well, they could never really do 1080p well.

My prediction is this: 4K is "here" (not really...) in their minds, sort of like George W. Bush going "mission accomplished", so they're probably going to latch onto some buzzword--my guess is "ray tracing"--and convince the plebs through constant and incessant advertising that they need this tech. As a result, games will suffer, customers will suffer, but they won't notice or care because they've been told something is a standard now and to not have it is crazy.

@vfighter said:

So amazing how you nerds can see into the future.

Only by looking at what has happened in the past... plus, don't you know PCs do everything? Including predicting the future. Also, our moms told us so.


#259 Juub1990
Member since 2013 • 12622 Posts

@loco145: Dude stop replying to @ronvalencia.


#260  Edited By ronvalencia
Member since 2008 • 29612 Posts

@loco145 said:

On GCN, operations aren't blocked, but they still use resources. That's the whole point: the extra silicon in Turing is not dead weight, it gives a performance advantage over whatever the next-gen consoles will have.

Read: at 300W and 7nm, AMD is competing with a 12nm/215W part... without taking into account the extra silicon in the latter. Your argument that GCN doesn't need that is not correct, because using a unit to do tensor operations on the Radeon VII would still use compute resources. That's my whole point.

1. The VII still has excess TFLOPS for its rasterization power.

This is shown by a Vega 56 at 1710 MHz with 12 TFLOPS beating an ASUS Strix Vega 64 at 1590 MHz with 13 TFLOPS.

De-noise is just a pixel reconstruction pass via the stream processors.


2. With Pascal, NVIDIA has rasterization superiority via high clock speeds with a lower equivalent CU count.

Titan XP is equivalent to 60 CUs, with 6 raster engines, 96 ROPS and 240 TMUs.

GTX 1080 Ti is equivalent to 56 CUs, with 6 raster engines and 88 ROPS at up to 1800 MHz (usually around 1700+ MHz), a 3MB L2 cache with ~1TB/s before DCC (which DCC can boost by up to 1.8X), and 224 TMUs.

GTX 1080 is equivalent to 40 CUs, with 4 raster engines and 64 ROPS at up to 1800 MHz, and a 2MB L2 cache.

GTX 1070 is equivalent to 30 CUs, with 3 raster engines and 64 ROPS at up to 1800 MHz, and a ~1.7MB L2 cache.

VS

Vega 64 has 64 CUs with 4 raster engines and 64 ROPS at up to 1536 MHz, a 4MB L2 cache with 1.5TB/s, and 256 TMUs.

VII has 60 CUs with 4 raster engines and 64 ROPS at up to 1800 MHz, at least a 4MB L2 cache with 1.8TB/s, and 240 TMUs.

Cryptocurrency compute shader workloads don't use the raster engines or the ROPS read/write path, i.e. the TMUs are used for read/write, hence the reason a Vega 64 rivals a GTX 1080 Ti there.

If a higher TMU count is being used as a workaround for being ROPS-bound, AMD should increase the ROPS count (which debunks any argument that AMD GPUs' 64 ROPS are not a bottleneck).

It's better for AMD to lower the CU count (reducing power consumption) and increase the clock speed (improving raster performance).


#261  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:

@loco145: Dude stop replying to @ronvalencia.

Fuk0ff, who do you think you are? You're not a fukin moderator.


#262 Juub1990
Member since 2013 • 12622 Posts

@ronvalencia: Stop spouting nonsense. You’re a fraud.


#263  Edited By ronvalencia
Member since 2008 • 29612 Posts

@pc_rocks said:
@gamecubepad said:

4k gaming on the PS4 Pro has 150W total system power consumption. Same game is 150W on PS4. Both systems were $399 competing against $500 systems from MS.

So not only does the total system need to consume only around 150W, it also needs to fit within a ~$380 BoM. Next gen systems will have something like 8-core 3rd-gen Ryzen CPU, 16GB GDDR6, 2GB sideport DDR4, 2TB HDD, and UHD Blu-Ray. Whatever is leftover from that power consumption and cost will determine what GPU they can include.

These are rumors of course, same rumors that said no 7nm Vega for gamers, so that was wrong, but they give a decent idea of where a PS5 GPU could slot. P.S.-Given the 7nm Radeon VII, these power consumption and price points seem very hopeful.

RX 3080: Navi 10, 8GB GDDR6, 150W, RTX 2070 / GTX 1080 class, $249
RX 3070: Navi 12, 8GB GDDR6, 120W, RTX 2060 / GTX 1070 class, $199

Polaris 10 was also "supposed" to be a 150W part, and they had to downclock it 25% to get it into the PS4 Pro, and on the X1X, they had to use Hovis method and vapor chamber cooler with 384-bit memory bus to get stock performance. I don't think $399 gets you Hovis and vapor chamber cooler. Also, RTX 2060 is a $350 card, so they won't be hitting that performance with a $199 card now that we've seen their 7nm Vega pricing.

One thing to keep in mind here is that the Radeon VII and other AMD cards have been using HBM2 to decrease their power draw. If they go with GDDR6 on consoles, the TDP would be higher, and if they go with HBM2, the price would be higher.

The PS4 Pro's power consumption for Infamous First Light at 4K is 155 watts.

-----

https://www.anandtech.com/show/11992/the-xbox-one-x-review/6

The Xbox One X was fitted with a Gold-rated power supply, and power consumption reached 172 watts.

http://energyusecalculator.com/electricity_gameconsole.htm

The Xbox One X reached 180 watts there.

Game consoles must avoid the steep power consumption curve that comes with higher clock speeds.
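
As a rough illustration of that curve (my own numbers, using the standard dynamic-power approximation P ≈ C·V²·f and an assumed voltage bump for the higher clock):

```cpp
// Hitting higher clocks usually requires more voltage, so dynamic power grows
// much faster than frequency. Assumed here: +17% clock (1536 -> 1800 MHz)
// needing ~8% more voltage.
#include <cstdio>

int main() {
    constexpr double f0 = 1.536, v0 = 1.00;   // baseline clock (GHz), relative voltage
    constexpr double f1 = 1.800, v1 = 1.08;   // higher clock, assumed voltage bump
    const double perf_gain  = f1 / f0;                         // ~1.17x throughput
    const double power_gain = (v1 * v1 * f1) / (v0 * v0 * f0); // ~1.37x dynamic power
    std::printf("Clock +%.0f%% -> dynamic power roughly +%.0f%%\n",
                (perf_gain - 1.0) * 100.0, (power_gain - 1.0) * 100.0);
}
```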


#264  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:

@ronvalencia: Stop spouting nonsense. You’re a fraud.

You're the real fraud.

https://gpuopen.com/using-sub-dword-addressing-on-amd-gpus-with-rocm/

Fiji and Polaris IP with packed math features.

https://gpucuriosity.wordpress.com/2017/09/10/xbox-one-xs-render-backend-2mb-render-cache-size-advantage-over-the-older-gcns/

The CPU has lower latency.

The Jaguar CPU's layout was modified toward Ryzen's CCX-style design instead of the old Jaguar horizontal chain design.

You fuk0ff


#265  Edited By Juub1990
Member since 2013 • 12622 Posts

@ronvalencia: Dude, stop lying. You’re wrong. Just admit and move on. Don’t pretend you know what you’re talking about.


#267  Edited By Juub1990
Member since 2013 • 12622 Posts

@ronvalencia: And you were wrong about the whole comparison. Don’t make me dig up the thread where you believed Rise of the Tomb Raider would be 4K/60fps on X1X. Or that time you believed AC: Origins would be 4K/60fps too.


#268  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:

@ronvalencia: And you were wrong about the whole comparison. Don’t make me dig up the thread where you believed Rise of the Tomb Raider would be 4K/60fps on X1X. Or that time you believed AC: Origins would be 4K/60fps too.

Do it.

https://wccftech.com/riseof-the-tomb-raider-xb1x-native4k/

Native 4K: (full 3840 by 2160) for highest fidelity resolution


#269 Juub1990
Member since 2013 • 12622 Posts
@ronvalencia said:
@Juub1990 said:

@ronvalencia: And you were wrong about the whole comparison. Don’t make me dig up the thread where you believed Rise of the Tomb Raider would be 4K/60fps on X1X. Or that time you believed AC: Origins would be 4K/60fps too.

Do it.

https://www.gamespot.com/forums/system-wars-314159282/shadow-of-the-tomb-raider-xbox-one-x-version-has-4-33431465/?page=1


#270 ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:
@ronvalencia said:
@Juub1990 said:

@ronvalencia: And you were wrong about the whole comparison. Don’t make me dig up the thread where you believed Rise of the Tomb Raider would be 4K/60fps on X1X. Or that time you believed AC: Origins would be 4K/60fps too.

Do it.

https://www.gamespot.com/forums/system-wars-314159282/shadow-of-the-tomb-raider-xbox-one-x-version-has-4-33431465/?page=1

You're a fool, I didn't make the fuking claim. The claim was made by the developer.


#271 Juub1990
Member since 2013 • 12622 Posts

@ronvalencia: You were defending it and using BF2 as a basis lol.


#272  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:

@ronvalencia: You were defending it and using BF2 as a basis lol.

That was assuming the programmer for said game was correct, and I cited the closest alternative example.

You can’t accept that the X1X is rivalling a GTX 1070 and obliterating a GTX 1060 in Far Cry 5 at 4K.


#273 Juub1990
Member since 2013 • 12622 Posts

@ronvalencia: Don’t change the topic and quit lying. I don’t care about the 1070. We told you 4K/60fps wasn’t happening for this game on Xbox One X and you tried to defend it. You used a completely inadmissible game as a basis for comparison.

Shows how much you know. You talk a lot but don’t ever deliver.


#274 michaelmikado
Member since 2019 • 406 Posts

Specs prediction:

8 cores/16 threads with a 40 CU Zen 2/Navi APU, 10.2-10.5 TFLOPS

16GB GDDR5 + 4GB HBM2 on-die

Onboard 32-64GB NAND or eMMC storage

Optional external drive / SD card / optical Blu-ray

$299-$349. Fully backward compatible with PS4, possibly some PS3 games. Could see a limited release in 2019 with a full release in 2020.

No need to design games specifically for it; it will play PS4 games at higher resolution. Most games will be cross-gen for 1-2 years following release.


#276  Edited By michaelmikado
Member since 2019 • 406 Posts


@pc_rocks said:
@gamecubepad said:

4k gaming on the PS4 Pro has 150W total system power consumption. Same game is 150W on PS4. Both systems were $399 competing against $500 systems from MS.

So not only does the total system need to consume only around 150W, it also needs to fit within a ~$380 BoM. Next gen systems will have something like 8-core 3rd-gen Ryzen CPU, 16GB GDDR6, 2GB sideport DDR4, 2TB HDD, and UHD Blu-Ray. Whatever is leftover from that power consumption and cost will determine what GPU they can include.

These are rumors of course, same rumors that said no 7nm Vega for gamers, so that was wrong, but they give a decent idea of where a PS5 GPU could slot. P.S.-Given the 7nm Radeon VII, these power consumption and price points seem very hopeful.

RX 3080: Navi 10, 8GB GDDR6, 150W, RTX 2070 / GTX 1080 class, $249
RX 3070: Navi 12, 8GB GDDR6, 120W, RTX 2060 / GTX 1070 class, $199

Polaris 10 was also "supposed" to be a 150W part, and they had to downclock it 25% to get it into the PS4 Pro, and on the X1X, they had to use Hovis method and vapor chamber cooler with 384-bit memory bus to get stock performance. I don't think $399 gets you Hovis and vapor chamber cooler. Also, RTX 2060 is a $350 card, so they won't be hitting that performance with a $199 card now that we've seen their 7nm Vega pricing.

One thing to keep in mind here is that the Radeon VII and other AMD cards have been using HBM2 to decrease their power draw. If they go with GDDR6 on consoles, the TDP would be higher, and if they go with HBM2, the price would be higher.

Why not just go with 4-8GB of HBM2 on-die and 12-16GB of GDDR5 to keep costs down? I think I read GDDR5 is half the price of GDDR6, and with a bank of 8GB HBM2 you're good on bandwidth.


#277 That_Old_Guy
Member since 2018 • 1233 Posts

HOLY CHRIST!!!

PC tech is more advanced than consoles?!?!

Is water still wet!?!??


#278  Edited By PC_Rocks
Member since 2018 • 8603 Posts

@michaelmikado said:


@pc_rocks said:
@gamecubepad said:

4k gaming on the PS4 Pro has 150W total system power consumption. Same game is 150W on PS4. Both systems were $399 competing against $500 systems from MS.

So not only does the total system need to consume only around 150W, it also needs to fit within a ~$380 BoM. Next gen systems will have something like 8-core 3rd-gen Ryzen CPU, 16GB GDDR6, 2GB sideport DDR4, 2TB HDD, and UHD Blu-Ray. Whatever is leftover from that power consumption and cost will determine what GPU they can include.

These are rumors of course, same rumors that said no 7nm Vega for gamers, so that was wrong, but they give a decent idea of where a PS5 GPU could slot. P.S.-Given the 7nm Radeon VII, these power consumption and price points seem very hopeful.

RX 3080: Navi 10, 8GB GDDR6, 150W, RTX 2070 / GTX 1080 class, $249
RX 3070: Navi 12, 8GB GDDR6, 120W, RTX 2060 / GTX 1070 class, $199

Polaris 10 was also "supposed" to be a 150W part, and they had to downclock it 25% to get it into the PS4 Pro, and on the X1X, they had to use Hovis method and vapor chamber cooler with 384-bit memory bus to get stock performance. I don't think $399 gets you Hovis and vapor chamber cooler. Also, RTX 2060 is a $350 card, so they won't be hitting that performance with a $199 card now that we've seen their 7nm Vega pricing.

One thing to keep in mind here is that the Radeon VII and other AMD cards have been using HBM2 to decrease their power draw. If they go with GDDR6 on consoles, the TDP would be higher, and if they go with HBM2, the price would be higher.

Why not just go with 4-8GB of HBM2 on-die and 12-16GB of GDDR5 to keep costs down? I think I read GDDR5 is half the price of GDDR6, and with a bank of 8GB HBM2 you're good on bandwidth.

8GB of HBM2 is still expensive and so is GDDR6, though it's cheaper than HBM2. In short, both are expensive for $400 consoles, and going with GDDR5 brings you the problem of low bandwidth.

In terms of bandwidth HBM2 > GDDR6 >>> GDDR5

Price HBM2 >> GDDR6 >> GDDR5

Power draw GDDR5 > GDDR6 > HBM2

Not to mention HBM2 also eats into the silicon budget for that substrate.


#279  Edited By michaelmikado
Member since 2019 • 406 Posts

@pc_rocks said:

@michaelmikado said:

@pc_rocks said:

@gamecubepad said:

4k gaming on the PS4 Pro has 150W total system power consumption. Same game is 150W on PS4. Both systems were $399 competing against $500 systems from MS.

So not only does the total system need to consume only around 150W, it also needs to fit within a ~$380 BoM. Next gen systems will have something like 8-core 3rd-gen Ryzen CPU, 16GB GDDR6, 2GB sideport DDR4, 2TB HDD, and UHD Blu-Ray. Whatever is leftover from that power consumption and cost will determine what GPU they can include.

These are rumors of course, same rumors that said no 7nm Vega for gamers, so that was wrong, but they give a decent idea of where a PS5 GPU could slot. P.S.-Given the 7nm Radeon VII, these power consumption and price points seem very hopeful.

RX 3080: Navi 10, 8GB GDDR6, 150W, RTX 2070 / GTX 1080 class, $249
RX 3070: Navi 12, 8GB GDDR6, 120W, RTX 2060 / GTX 1070 class, $199

Polaris 10 was also "supposed" to be a 150W part, and they had to downclock it 25% to get it into the PS4 Pro, and on the X1X, they had to use Hovis method and vapor chamber cooler with 384-bit memory bus to get stock performance. I don't think $399 gets you Hovis and vapor chamber cooler. Also, RTX 2060 is a $350 card, so they won't be hitting that performance with a $199 card now that we've seen their 7nm Vega pricing.

One thing to keep in mind here is that the Radeon VII and other AMD cards have been using HBM2 to decrease their power draw. If they go with GDDR6 on consoles, the TDP would be higher, and if they go with HBM2, the price would be higher.

Why not just go with 4-8GB of HBM2 on-die and 12-16GB of GDDR5 to keep costs down? I think I read GDDR5 is half the price of GDDR6, and with a bank of 8GB HBM2 you're good on bandwidth.

8GB of HBM2 is still expensive and so is GDDR6, though it's cheaper than HBM2. In short, both are expensive for $400 consoles, and going with GDDR5 brings you the problem of low bandwidth.

In terms of bandwidth HBM2 > GDDR6 >>> GDDR5

Price HBM2 >> GDDR6 >> GDDR5

Power draw GDDR5 > GDDR6 > HBM2

Not to mention HBM2 also eats into the silicon budget for that substrate.

But your scenario only holds true if they fully commit to one type of VRAM. A small stack of HBM2 can easily be put on-die, which further reduces TDP, heat, etc. Obviously going full HBM2 would be cost-prohibitive, but no one is suggesting that. Further, to reduce costs and power draw we're more likely to see a small bank of flash memory, likely 32-64GB of eMMC, than a Blu-ray drive. I'd even bet they would make the HDD completely optional if they had onboard eMMC. Consoles since the X360 have had smaller banks of high-bandwidth RAM; the PS4 having completely unified high-bandwidth RAM was the exception, not the rule.

I only suggested GDDR5 for cost reasons, with HBM2 offsetting the bandwidth constraints, but pairing a small HBM2 stack on-die with a healthy amount of GDDR6 was always the way I assumed they'd go, as it gives the best all-around cost/performance/TDP ratio.
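
For rough context on the bandwidth side (my own nominal figures, not from any leak; actual console configurations would differ):

```cpp
// Peak bandwidth = pin speed x bus width / 8. Assumed: one HBM2 stack at
// 2.0 Gbps on a 1024-bit interface, GDDR5 at 8 Gbps and GDDR6 at 14 Gbps,
// each on a 256-bit bus.
#include <cstdio>

int main() {
    struct Mem { const char* name; double gbps_per_pin; int bus_bits; };
    const Mem pools[] = {
        {"1x HBM2 stack (1024-bit @ 2.0 Gbps)", 2.0, 1024},
        {"GDDR5 256-bit @ 8 Gbps",              8.0, 256},
        {"GDDR6 256-bit @ 14 Gbps",            14.0, 256},
    };
    for (const Mem& m : pools) {
        double gbs = m.gbps_per_pin * m.bus_bits / 8.0;  // GB/s
        std::printf("%-38s %6.0f GB/s\n", m.name, gbs);
    }
    // Note: with a split pool the GPU doesn't automatically see the sum of
    // both numbers; the hot working set has to actually live in the fast pool.
}
```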


#280 BoxRekt
Member since 2019 • 2425 Posts

@michaelmikado: You're going to be disappointed if you're expecting the PS5 to be less than $400, much less $500, which is what it will most likely be.

A little delusional, my friend. The PS4 Pro is still $400 at 4TF; you expect a 10+TF system with BC, off the shelf, for less than $400? Stop drinking the Kool-Aid, bud.

The PS5 will be no less than $500, and so will the next Xbox.


#281 michaelmikado
Member since 2019 • 406 Posts

@boxrekt said:

@michaelmikado: You're going to be disappointed if you're expecting the PS5 to be less than $400, much less $500, which is what it will most likely be.

A little delusional, my friend. The PS4 Pro is still $400 at 4TF; you expect a 10+TF system with BC, off the shelf, for less than $400? Stop drinking the Kool-Aid, bud.

The PS5 will be no less than $500, and so will the next Xbox.

We will see who's right soon, but there's nothing delusional about it if you're following AMD hardware trends. They have the 2200G, which is 1.3 TFLOPS for under $100, and the 2400G, which is 1.72 TFLOPS for around $150. These are 4-core APUs on 14nm, and a 7nm variant based on Zen 2/3 and Navi would be expected to at least double the core count and raw TFLOPS at the same price. Just like before, the PS5 and next Xbox will be based on whatever parts/APUs AMD currently has in its pipeline. The main guts of the systems will likely be under $200. I'm also betting neither will invest heavily in optical drives, or even HDDs if those are included at all. Further, the PS4 Pro isn't sold at a loss like initial console generations are. Depending on how they configure these consoles, a next-gen console could easily cost manufacturers $300-350, because it looks like they are using mostly off-the-shelf parts rather than fully custom designs. Using the PS4 Pro as an example of a new console generation launch is probably one of the worst and most delusional things you could do.


#282 ronvalencia
Member since 2008 • 29612 Posts

@Juub1990 said:

@ronvalencia: Stop spouting nonsense. You’re a fraud.

You fuk0off.


#283 PC_Rocks
Member since 2018 • 8603 Posts

@michaelmikado:

You said 8GB HBM2 + GDDR5. 8GB of HBM2 is still mighty expensive, and as I said, placing HBM2 will also eat into the silicon budget of the APU, so they will have less die space for the GPU and CPU.


#284 scatteh316
Member since 2004 • 10273 Posts
@pc_rocks said:

@michaelmikado:

You said 8GB HBM2 + GDDR5. 8GB of HBM2 is still mighty expensive, and as I said, placing HBM2 will also eat into the silicon budget of the APU, so they will have less die space for the GPU and CPU.

Why would it? The HBM2 memory controller uses fewer transistors than a conventional controller, and the memory stacks themselves are not part of the same die as the GPU core logic.