Why are Xbox fans not happy with the Series X's power?

#151  Edited By ronvalencia
Member since 2008 • 29612 Posts

@kazhirai said:
@pc_rocks said:
@ronvalencia said:

The argument for GDDR being better for the CPU is moot when the PS5 has to reduce CPU load to enable the 2230 MHz GPU clock.

Having the CPU process raster graphics is not optimal.

The argument is not useless. DDR is superior to GDDR for CPU workloads.

Unless you have all of the specifications for timings and relative latencies I'm just not buying it.

For a given generation, AMD's memory access times are the same for DDR3 (A6-3650) and GDDR5 (HD 6850).

Intel memory controller's access times are lower, hence superior over AMD's.

The kicker..

From https://www.computer.org/csdl/journal/si/2019/08/08674801/18IluD0rWjS

GDDR5 and DDR4 have similar latency.

GDDR6 has lower latency when compared to GDDR5 (a claim made by Rambus).

Ryzen 7 3700X has 32 MB L3 cache, hence the above argument is mostly a minor issue outside of PC benchmarks for reaching the best score.

https://www.rambus.com/blog_category/hbm-and-gddr6/

AI-specific hardware has been a catalyst for this tremendous growth, but there are always bottlenecks that must be addressed. A poll of the audience participants found that memory bandwidth was their #1 area for needed focus. Steve and Bill agreed and explored how HBM2E and GDDR6 memory could help advance AI/ML to the next level.

Steve discussed how HBM2E provides unsurpassed bandwidth and capacity, in a very compact footprint, that is a great fit for AI/ML training with deployments in heat- and space-constrained data centers. At the same time, the excellent performance and low latency of GDDR6 memory, built on time-tested manufacturing processes, make it an ideal choice for AI/ML inference which is increasingly implemented in powerful “IoT” devices such as ADAS in cars and trucks.
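As a back-of-the-envelope check on the "similar latency" claim, CAS latency can be converted from cycles into nanoseconds. The CL values below are illustrative assumptions (typical published ranges), not the timings of any particular module:

```python
# First-word CAS latency in ns = CL cycles / command clock (MHz) * 1000.
# The command clock is the per-pin data rate divided by the interface's clock
# multiplier (2x for DDR4, 4x for GDDR5). CL values here are assumptions only.

def cas_latency_ns(cl_cycles: int, data_rate_mtps: int, clock_multiplier: int) -> float:
    command_clock_mhz = data_rate_mtps / clock_multiplier
    return cl_cycles / command_clock_mhz * 1000.0

examples = {
    "DDR4-3200 CL22 (JEDEC)":      cas_latency_ns(22, 3200, 2),
    "DDR4-3200 CL16 (enthusiast)": cas_latency_ns(16, 3200, 2),
    "GDDR5-7000, assumed CL ~20":  cas_latency_ns(20, 7000, 4),
}

for name, ns in examples.items():
    print(f"{name}: ~{ns:.1f} ns")  # all land in the same ~10-14 ns ballpark
```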

#152  Edited By Xplode_games
Member since 2011 • 2540 Posts

@tormentos said:
@ronvalencia said:

XSX includes 25% extra memory bandwidth over RX 5700 XT's 448 GB/s.

XSX includes 25% extra memory controllers over RX 5700 XT's 16 memory channels which could lead to 25% extra GPU L2 cache over RX 5700 XT's 4MB L2 cache e.g. 5MB L2 cache.

RX 5700 XT doesn't include extra memory bandwidth over RX 5700, Cow farts!

Add 25% on RX 5700 XT's 37.8 fps results would land on 47.25 fps which is RTX 2080 level.

RX 5700 XT's results are below my monitor's free sync range, hence this GPU is insufficient for smooth 4K frame rates.

My argument wasn't comparing the 5700XT vs the xbox series X,so again you are arguing something no one was.

My comparison was what a 1.8TF gap MEANS PERFORMACE WISE on this 2 GPU's.

I don't care if your argument is based on your frying pan,the argument here i was having and which YOU quoted OPENLY was about the performance delta between this 2 GPU with 1.8TF gap.

Even taking your argument as good 10FPS is nothing man,the PS4 command bigger gap that those in several games,in fact it command some times frames and resolution as well.

You are the one who is not understanding this, please pay attention. You keep arguing that the difference between a 5700 and 5700 XT is small even though it technically has almost two teraflops more in power.

Here is the problem, you are ignoring the fact that ron points out that the XsX has much higher memory bandwidth and more cache so that will increase the gap. So the XsX doesn't only have more teraflops, it also has more memory bandwidth and cache so this is not a good or fair comparison.

Your comparison ironically hurts your own argument, because what is logical to compare is the PS5's 9.2 teraflops with its overclock to 10.2. This overclock is a good comparison to the 5700 vs 5700 XT because both cards have the same memory bandwidth and L2 cache.

5700 vs 5700 XT

5700 = 448 GB/s memory bandwidth and 7.9 teraflops

5700 XT = 448 GB/s memory bandwidth and 9.75 teraflops

The improvement of the 5700 XT is small because the memory bandwidth does not scale with the increase in teraflops.

PS5 vs PS5 overclock

PS5 = 448 GB/s memory bandwidth and 9.2 teraflops

PS5 OC = 448 GB/s memory bandwidth and 10.2 teraflops

The OC PS5 is not really going to gain much(other than heat) from the overclock as you pointed out in your 5700 vs 5700 XT comparisons.

However, the XsX is a different story because it not only has 12 teraflops but also 560 GB/s of memory bandwidth and a larger cache, allowing it to scale in performance more effectively and causing a much bigger gap. We didn't even get into ray tracing, which will be a massive advantage for the XsX with 52 compute units vs 36 for the PS5. It's not even close; stop flipping out and just enjoy the PS5 for what it is, not what you wish it would be.
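One way to see the scaling point is to compare bandwidth per teraflop across these configurations; a quick sketch using the numbers above (the PS5 OC row is the hypothetical case being discussed, and the XSX row uses its 10 GB fast pool):

```python
# Memory bandwidth available per teraflop of compute for each configuration.
configs = {
    "RX 5700":      (448.0, 7.9),
    "RX 5700 XT":   (448.0, 9.75),
    "PS5 (9.2 TF)": (448.0, 9.2),
    "PS5 OC":       (448.0, 10.2),   # hypothetical overclock case from the post
    "XSX":          (560.0, 12.15),  # 10 GB fast pool
}

for name, (bw_gbps, tflops) in configs.items():
    print(f"{name}: {bw_gbps / tflops:.1f} GB/s per TFLOP")
```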

#153  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Xplode_games said:
@tormentos said:
@ronvalencia said:

XSX includes 25% extra memory bandwidth over RX 5700 XT's 448 GB/s.

XSX includes 25% extra memory controllers over RX 5700 XT's 16 memory channels which could lead to 25% extra GPU L2 cache over RX 5700 XT's 4MB L2 cache e.g. 5MB L2 cache.

RX 5700 XT doesn't include extra memory bandwidth over RX 5700, Cow farts!

Add 25% on RX 5700 XT's 37.8 fps results would land on 47.25 fps which is RTX 2080 level.

RX 5700 XT's results are below my monitor's free sync range, hence this GPU is insufficient for smooth 4K frame rates.

My argument wasn't comparing the 5700XT vs the xbox series X,so again you are arguing something no one was.

My comparison was what a 1.8TF gap MEANS PERFORMACE WISE on this 2 GPU's.

I don't care if your argument is based on your frying pan,the argument here i was having and which YOU quoted OPENLY was about the performance delta between this 2 GPU with 1.8TF gap.

Even taking your argument as good 10FPS is nothing man,the PS4 command bigger gap that those in several games,in fact it command some times frames and resolution as well.

You are the one who is not understanding this, please pay attention. You keep arguing that the difference between a 5700 and 5700 XT is small even though it technically has almost two teraflops more in power.

Here is the problem, you are ignoring the fact that ron points out that the XsX has much higher memory bandwidth and more cache so that will increase the gap. So the XsX doesn't only have more teraflops, it also has more memory bandwidth and cache so this is not a good or fair comparison.

Your comparison ironically hurts your own argument, because what is logical to compare is the PS5's 9.2 teraflops with its overclock to 10.2. This overclock is a good comparison to the 5700 vs 5700 XT because both cards have the same memory bandwidth and L2 cache.

5700 vs 5700 XT

5700 = 448 GB/s memory bandwidth and 7.9 teraflops

5700 XT = 448 GB/s memory bandwidth and 9.75 teraflops

The improvement of the 5700 XT is small because the memory bandwidth does not scale with the increase in teraflops.

PS5 vs PS5 overclock

PS5 = 448 GB/s memory bandwidth and 9.2 teraflops

PS5 OC = 448 GB/s memory bandwidth and 10.2 teraflops

The OC PS5 is not really going to gain much(other than heat) from the overclock as you pointed out in your 5700 vs 5700 XT comparisons.

However, the XsX is a different story because it not only has 12 teraflops but also 560 GB/s of memory bandwidth and a larger cache, allowing it to scale in performance more effectively and causing a much bigger gap. We didn't even get into ray tracing, which will be a massive advantage for the XsX with 52 compute units vs 36 for the PS5. It's not even close; stop flipping out and just enjoy the PS5 for what it is, not what you wish it would be.

Budgetary issues interfere with the optimal memory configuration, and that compromise ends up being used as a point in the technical debate.

NVIDIA updated the RTX 2080 Super FE with GDDR6-15600 memory modules, and NVIDIA and its target audience have fewer budgetary constraints than AMD and its target audience.

AMD/Sony's budgetary issues in the technical debate are getting annoying.

#154 navyguy21
Member since 2003 • 17931 Posts

@Xplode_games: But dat SSD doe...

#155 Xplode_games
Member since 2011 • 2540 Posts

@navyguy21 said:

@Xplode_games: But dat SSD doe...

SSD sectors powered by blu rays emitted from the hidden sun of power!

#156 WitIsWisdom
Member since 2007 • 10448 Posts

@sealionact said:

@joshrmeyer: In order for your post to be true, you'd have to tell me which titles Sony are launching for ps5...because so far, I see nothing that suggests they will be killing it. Also, Hellblade 2 would need to disappear because that doesnt fit into your 12tfp/1.8tflp scenario...its a series x/pc only game.

While true, it is rumored to be very early in development and probably won't release for at least 2-3 years.

#157  Edited By tormentos
Member since 2003 • 33793 Posts

@Xplode_games said:

You are the one who is not understanding this, please pay attention. You keep arguing that the difference between a 5700 and 5700 XT is small even though it technically has almost two teraflops more in power.

Here is the problem, you are ignoring the fact that ron points out that the XsX has much higher memory bandwidth and more cache so that will increase the gap. So the XsX doesn't only have more teraflops, it also has more memory bandwidth and cache so this is not a good or fair comparison.

Your comparison ironically hurts your own argument, because what is logical to compare is the PS5's 9.2 teraflops with its overclock to 10.2. This overclock is a good comparison to the 5700 vs 5700 XT because both cards have the same memory bandwidth and L2 cache.

5700 vs 5700 XT

5700 = 448 GB/s memory bandwidth and 7.9 teraflops

5700 XT = 448 GB/s memory bandwidth and 9.75 teraflops

The improvement of the 5700 XT is small because the memory bandwidth does not scale with the increase in teraflops.

PS5 vs PS5 overclock

PS5 = 448 GB/s memory bandwidth and 9.2 teraflops

PS5 OC = 448 GB/s memory bandwidth and 10.2 teraflops

The OC PS5 is not really going to gain much(other than heat) from the overclock as you pointed out in your 5700 vs 5700 XT comparisons.

However, the XsX is a different story because it not only has 12 teraflops but also 560 GB/s of memory bandwidth and a larger cache, allowing it to scale in performance more effectively and causing a much bigger gap. We didn't even get into ray tracing, which will be a massive advantage for the XsX with 52 compute units vs 36 for the PS5. It's not even close; stop flipping out and just enjoy the PS5 for what it is, not what you wish it would be.

Ronvalencia is even more deluded than you.

Fact is, the Xbox Series X doesn't have much higher bandwidth: it has 25% faster RAM for its 10 GB pool, while the PS5's RAM is 33% faster than the Xbox's other 6 GB pool, so it is not even a total win for the Xbox.

It has 18% more GPU power. Bandwidth will NOT increase power; it only lets you take advantage of what you already have. This is very simple.

If you have two machines with 12 TF, one with 448 GB/s and the other with 560 GB/s, it's obvious the one with more bandwidth will get more out of the GPU. But if you have two machines, one with 12 TF and 560 GB/s and the other with 12 TF and 800 GB/s, both will perform the same, given that 560 GB/s is already enough to feed 12 TF.

You could give 560 GB/s to the Xbox One vs the PS4 and the PS4 would still beat it, because power comes from CUs and clock speed, not from bandwidth. Bandwidth is basically a highway for cars: what you drive on it is what matters, as long as you are not getting severely crippled.

My comparison of the 5700 XT vs the 5700 shows a clean gap under the same conditions, and 1.8 TF produces a small gap, period. That is what you will get; you will not get gaps like the Xbox One X over the Pro or the PS4 over the Xbox One, because the Series X doesn't have the same gap in power.

So it is 18% more power, with faster bandwidth on one side and slower RAM on the other.

The PS4 had 40% more power and faster bandwidth as well.

The Xbox One X had 45% more power, plus faster bandwidth, plus an additional 4 GB of RAM.

That the Xbox Series X will perform better isn't even debatable; in fact it shouldn't be, period. But the gap will not be big, and you people are setting yourselves up for disappointment.

With just 25% more bandwidth, the Xbox will not be able to deliver a massive gap in ray tracing, since ray tracing has a high cost in performance and bandwidth; abusing it may even hurt the Xbox's resolution or frame rate.

How can the Xbox push 45% more ray tracing (calculated from the difference in CUs) plus 18.3% more GPU power through just 25% more bandwidth, and only on a 10 GB pool?
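For reference, here is where those percentages come from, as a quick sketch using the spec figures quoted in this thread (560/336 GB/s split pools and the widely reported 52 CUs for XSX; 448 GB/s and 36 CUs for PS5):

```python
# Quick sketch of the quoted deltas, using the publicly stated launch specs.
xsx_fast_pool_gbps = 560.0   # 10 GB pool on Series X
xsx_slow_pool_gbps = 336.0   # remaining 6 GB pool on Series X
ps5_gbps = 448.0             # unified 16 GB pool on PS5

xsx_tflops = 12.147
ps5_tflops = 10.28           # 36 CUs at the 2.23 GHz cap

def pct_more(a: float, b: float) -> float:
    """How much larger a is than b, in percent."""
    return (a / b - 1.0) * 100.0

print(f"XSX fast pool vs PS5 bandwidth: +{pct_more(xsx_fast_pool_gbps, ps5_gbps):.0f}%")  # ~25%
print(f"PS5 vs XSX slow pool bandwidth: +{pct_more(ps5_gbps, xsx_slow_pool_gbps):.0f}%")  # ~33%
print(f"XSX vs PS5 compute:             +{pct_more(xsx_tflops, ps5_tflops):.0f}%")        # ~18%
print(f"XSX vs PS5 CU count:            +{pct_more(52, 36):.0f}%")                        # ~44%
```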

@navyguy21 said:

@Xplode_games: But dat SSD doe...

It's funny how you jump into this shit and try to make it seem like you are unbiased.

@navyguy21 said:

900p is fine if it means it was lowered to maintain a steady frame rate.

1080p and 20-30 fps is useless to me, how can you enjoy the game with fluctuating frame rate?

Devs should just stop announcing the frame rate and just focus on making great games FIRST, then crank the resolution up as far as it will go without compromising gameplay.

Id take 720p over 900p if it meant having a better game.

Big Rigs at 1440p is still a terrible game.

https://www.gamespot.com/forums/system-wars-314159282/is-900p-next-gen-enough-for-you-31698447/?page=2

This is you in one of your many shitty posts damage-controlling 720p and 900p on the Xbox One, so spare me the total hypocrisy. Cows will be OK; after all, we don't have to put up with shitty 720p for $100 more like lemmings did.

#158  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@Xplode_games said:

You are the one who is not understanding this, please pay attention. You keep arguing that the difference between a 5700 and 5700 XT is small even though it technically has almost two teraflops more in power.

Here is the problem, you are ignoring the fact that ron points out that the XsX has much higher memory bandwidth and more cache so that will increase the gap. So the XsX doesn't only have more teraflops, it also has more memory bandwidth and cache so this is not a good or fair comparison.

Your comparison ironically hurts your own argument, because what is logical to compare is the PS5's 9.2 teraflops with its overclock to 10.2. This overclock is a good comparison to the 5700 vs 5700 XT because both cards have the same memory bandwidth and L2 cache.

5700 vs 5700 XT

5700 = 448 GB/s memory bandwidth and 7.9 teraflops

5700 XT = 448 GB/s memory bandwidth and 9.75 teraflops

The improvement of the 5700 XT is small because the memory bandwidth does not scale with the increase in teraflops.

PS5 vs PS5 overclock

PS5 = 448 GB/s memory bandwidth and 9.2 teraflops

PS5 OC = 448 GB/s memory bandwidth and 10.2 teraflops

The OC PS5 is not really going to gain much(other than heat) from the overclock as you pointed out in your 5700 vs 5700 XT comparisons.

However, the XsX is a different story because it not only has 12 teraflops but also 560 GB/s of memory bandwidth and a larger cache, allowing it to scale in performance more effectively and causing a much bigger gap. We didn't even get into ray tracing, which will be a massive advantage for the XsX with 52 compute units vs 36 for the PS5. It's not even close; stop flipping out and just enjoy the PS5 for what it is, not what you wish it would be.

Ronvalencia is even more deluded than you.

Fact is, the Xbox Series X doesn't have much higher bandwidth: it has 25% faster RAM for its 10 GB pool, while the PS5's RAM is 33% faster than the Xbox's other 6 GB pool, so it is not even a total win for the Xbox.

It has 18% more GPU power. Bandwidth will NOT increase power; it only lets you take advantage of what you already have. This is very simple.

If you have two machines with 12 TF, one with 448 GB/s and the other with 560 GB/s, it's obvious the one with more bandwidth will get more out of the GPU. But if you have two machines, one with 12 TF and 560 GB/s and the other with 12 TF and 800 GB/s, both will perform the same, given that 560 GB/s is already enough to feed 12 TF.

You could give 560 GB/s to the Xbox One vs the PS4 and the PS4 would still beat it, because power comes from CUs and clock speed, not from bandwidth. Bandwidth is basically a highway for cars: what you drive on it is what matters, as long as you are not getting severely crippled.

My comparison of the 5700 XT vs the 5700 shows a clean gap under the same conditions, and 1.8 TF produces a small gap, period. That is what you will get; you will not get gaps like the Xbox One X over the Pro or the PS4 over the Xbox One, because the Series X doesn't have the same gap in power.

So it is 18% more power, with faster bandwidth on one side and slower RAM on the other.

The PS4 had 40% more power and faster bandwidth as well.

The Xbox One X had 45% more power, plus faster bandwidth, plus an additional 4 GB of RAM.

That the Xbox Series X will perform better isn't even debatable; in fact it shouldn't be, period. But the gap will not be big, and you people are setting yourselves up for disappointment.

With just 25% more bandwidth, the Xbox will not be able to deliver a massive gap in ray tracing, since ray tracing has a high cost in performance and bandwidth; abusing it may even hurt the Xbox's resolution or frame rate.

How can the Xbox push 45% more ray tracing (calculated from the difference in CUs) plus 18.3% more GPU power through just 25% more bandwidth, and only on a 10 GB pool?

That's a flawed argument when you don't actually know the specific memory usage factors.

Games don't operate without CPU memory storage, cow dung. Also, PS5's CPU usage is reduced to enable the GPU to reach 2230 MHz.

For the XSX CPU, there's 4 MB of L2 cache and at least 8 MB of L3 cache (going by the mobile Ryzen 7 4800 example), hence a total of 12 MB of L2/L3 cache, which can easily handle a current-generation console workload like Gears 5's CPU game logic.

There's a direct link between the CPU and the GPU.

XSX doesn't have the PC's CPU-driven translation layer from DirectX 12 API calls to native GPU instructions; the GPU's DirectX 12 microcode engine handles the translation instead.

If the CPU's 10 GB/s equates to 30 Hz, then a 120 Hz version scales to 40 GB/s (close to 128-bit DDR4-2400).
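A minimal sketch of that scaling argument; the 10 GB/s at 30 Hz figure is taken as the post's starting assumption, and the DDR4-2400 peak is computed from data rate times a 128-bit bus width:

```python
# CPU bandwidth demand scaling linearly with target frame rate, compared against
# the theoretical peak of a 128-bit DDR4-2400 configuration.

CPU_BW_AT_30HZ_GBPS = 10.0  # assumption taken from the post

def cpu_bw_at(target_hz: float, base_hz: float = 30.0) -> float:
    """Scale CPU bandwidth demand linearly with frame rate."""
    return CPU_BW_AT_30HZ_GBPS * (target_hz / base_hz)

def ddr_peak_gbps(data_rate_mtps: float, bus_bits: int) -> float:
    """Peak theoretical bandwidth: transfers per second times bytes per transfer."""
    return data_rate_mtps * (bus_bits / 8) / 1000.0

print(f"CPU demand at 120 Hz:   {cpu_bw_at(120):.0f} GB/s")            # 40 GB/s
print(f"128-bit DDR4-2400 peak: {ddr_peak_gbps(2400, 128):.1f} GB/s")  # 38.4 GB/s
```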

The XSX GPU has at least 4 MB of L2 cache, and its 20 memory controllers could yield 5 MB of L2 cache.

5 MB of L2 cache is 25% more than the RX 5700 XT's 4 MB of L2 cache. L2 cache capacity could be higher still due to RDNA 2's RT cores; this design detail has not yet been released to the public.

Render targets are the workload with the highest memory bandwidth usage despite smallish memory storage requirements.

For example:

Scaling a render target's 290 MB usage to 4K resolution lands at about 1.16 GB. Note why the XBO's 32 MB of ESRAM is not enough for non-tiled frame buffers (aka render targets).

Scaling that 290 MB usage to double the 4K pixel count lands at about 2.32 GB.
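To make that scaling concrete, here is a rough sketch that takes the quoted 290 MB figure as the base footprint (assumed to correspond to a roughly 1080p-sized set of render targets, since the 4K figure is exactly 4x) and scales it by pixel count:

```python
# Scale a render-target footprint by pixel count.
# The 290 MB base is the figure quoted in the post; its base resolution is an
# assumption (1920x1080), chosen because the quoted 4K figure is exactly 4x.
BASE_MB = 290.0
BASE_PIXELS = 1920 * 1080

def scaled_footprint_gb(pixels: int) -> float:
    return BASE_MB * pixels / BASE_PIXELS / 1000.0  # decimal GB, matching the post

UHD_PIXELS = 3840 * 2160
print(f"4K:             ~{scaled_footprint_gb(UHD_PIXELS):.2f} GB")      # ~1.16 GB
print(f"Double 4K area: ~{scaled_footprint_gb(UHD_PIXELS * 2):.2f} GB")  # ~2.32 GB
```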

XSX's 560 GB/s, 10 GB of "GPU optimal" memory is 25% higher in memory bandwidth and 25% higher in VRAM capacity than the RX 5700 XT's 448 GB/s with 8 GB of VRAM.

Too bad for you, XSX has already delivered RTX 2080-class performance in the Gears 5 built-in benchmark at PC Ultra settings.

If the 5700 has a 7.7 TFLOPS average with 448 GB/s, the RX 5700 XT's 9.66 TFLOPS average should have about 560 GB/s. The 5700 XT's 9.66 TFLOPS average is already gimped at 448 GB/s.

XSX's 12.147 TFLOPS should have 672 GB/s of memory bandwidth, i.e. 14000 MT/s on a 384-bit bus. XSX's 320-bit bus is a cost-reduction move by MS. A larger L2 cache is the alternative mitigation path: a larger L2 cache enables the GPU to hold data on-chip longer than a smaller L2 cache would.
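The raw bandwidth figures in this argument fall straight out of data rate times bus width; a quick sketch:

```python
# Peak GDDR6 bandwidth = data rate (MT/s) * bus width in bytes.
def peak_bandwidth_gbps(data_rate_mtps: float, bus_bits: int) -> float:
    return data_rate_mtps * (bus_bits / 8) / 1000.0

print(f"14000 MT/s x 320-bit: {peak_bandwidth_gbps(14000, 320):.0f} GB/s")  # 560 (XSX 10 GB pool)
print(f"14000 MT/s x 384-bit: {peak_bandwidth_gbps(14000, 384):.0f} GB/s")  # 672 (hypothetical wider bus)
print(f"14000 MT/s x 256-bit: {peak_bandwidth_gbps(14000, 256):.0f} GB/s")  # 448 (RX 5700 XT / PS5)
```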

The RX 5700 XT's 4 MB of L2 cache is without RDNA 2's RT core design, which could add e.g. another 4 MB L2 cache pool.

The XSX GPU has more than 13 TFLOPS of equivalent throughput from its RT cores, hence the need for extra cache storage beyond the 4 MB of L2 cache serving 12.147 TFLOPS of raster graphics.

#160 PC_Rocks
Member since 2018 • 8611 Posts

@ronvalencia said:
@kazhirai said:

Unless you have all of the specifications for timings and relative latencies I'm just not buying it.

For a given generation, AMD's memory access times are the same for DDR3 (A6-3650) and GDDR5 (HD 6850).

Intel memory controller's access times are lower, hence superior over AMD's.

The kicker..

From https://www.computer.org/csdl/journal/si/2019/08/08674801/18IluD0rWjS

GDDR5 and DDR4 have similar latency.

GDDR6 has lower latency when compared to GDDR5 (a claim made by Rambus).

Ryzen 7 3700X has 32 MB L3 cache, hence the above argument is mostly a minor issue outside of PC benchmarks for reaching the best score.

https://www.rambus.com/blog_category/hbm-and-gddr6/

AI-specific hardware has been a catalyst for this tremendous growth, but there are always bottlenecks that must be addressed. A poll of the audience participants found that memory bandwidth was their #1 area for needed focus. Steve and Bill agreed and explored how HBM2E and GDDR6 memory could help advance AI/ML to the next level.

Steve discussed how HBM2E provides unsurpassed bandwidth and capacity, in a very compact footprint, that is a great fit for AI/ML training with deployments in heat- and space-constrained data centers. At the same time, the excellent performance and low latency of GDDR6 memory, built on time-tested manufacturing processes, make it an ideal choice for AI/ML inference which is increasingly implemented in powerful “IoT” devices such as ADAS in cars and trucks.

No one is disputing that GDDR6 has lower latency than GDDR5. The point of discussion is DDR is better than GDDR for CPUs.

#161 PC_Rocks
Member since 2018 • 8611 Posts

@tormentos:

So, for the 31st time, what are the base clocks for the PS5's CPU and GPU? Why did Cerny refuse to answer when asked point blank?

In case you still have difficulty understanding it: what's the lowest the clocks can go on each component to compensate for full clocks on the other?

Lastly, for the 23rd time, where are the sources for the claim that GDDR is better than DDR for CPUs?

#162 Sagemode87
Member since 2013 • 3438 Posts

They're obviously not happy about the minuscule difference, since they have to constantly say the PS5 is 9.2 instead of 10.3 TF. They know a 1.7 TF difference isn't much considering the amount of diminishing returns these consoles will have.

#163 Pedro
Member since 2002 • 73971 Posts

@Sagemode87 said:

They're obviously not happy about the minuscule difference, since they have to constantly say the PS5 is 9.2 instead of 10.3 TF. They know a 1.7 TF difference isn't much considering the amount of diminishing returns these consoles will have.

Sounds more like the teasing of your 9.2 TFLOPS console is bothering you.😂

#164 sealionact
Member since 2014 • 10043 Posts

@Sagemode87: G'wan...make tormy a happy bunny. Point out one single case of a lem on these boards saying they're disappointed with 12tflps.

Take your time.

#165 Pedro
Member since 2002 • 73971 Posts

@sealionact said:

@Sagemode87: G'wan...make tormy a happy bunny. Point out one single case of a lem on these boards saying they're disappointed with 12tflps.

Take your time.

#166  Edited By tormentos
Member since 2003 • 33793 Posts

@Pedro:

Seriously? In here, a place where lemmings used to claim a 40% gap would be magically closed by an API?

GTFO. You are one of those who mysteriously have an old join date but were nowhere to be found when the PS4 was owning the Xbox. No, you started to post regularly after 2016 when the X model was coming, and so did i_hatelesbians_daily, sealionact and several others, strangely coinciding with the sudden mass lemming rapture this place saw.😂

The best part is that you PRETEND to be a developer 😂 when you are an MS fanboy; you claimed you owned a PS3 last gen and didn't even know that this gen Sony had cross-buys with superior graphics.

So spare us the "you are just teasing"; you, like most lemmings here, CARE.😂

#167 tormentos
Member since 2003 • 33793 Posts

@pc_rocks:

2.11 GHz, going by Cerny's statement that a 10% drop in power yields only a couple of percent in clocks; DF claims 3 to 4% in their video. I use 5%, so I'm even giving you something extra.

Now, how many frames do you think that will cost the GPU? Lol.
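For reference, a quick sketch of what that downclock means in compute terms (36 CUs, 64 shaders per CU, 2 FLOPs per shader per clock; the 5% drop is the assumption used in this exchange):

```python
# FP32 throughput = CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock (GHz).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5_peak = tflops(36, 2.23)            # ~10.28 TFLOPS at the advertised cap
ps5_dropped = tflops(36, 2.23 * 0.95)  # assumed worst-case 5% downclock, ~2.12 GHz

print(f"PS5 at 2.23 GHz:  {ps5_peak:.2f} TFLOPS")
print(f"PS5 at -5% clock: {ps5_dropped:.2f} TFLOPS ({(1 - ps5_dropped / ps5_peak) * 100:.0f}% lower)")
```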

#168  Edited By Pedro
Member since 2002 • 73971 Posts

@tormentos said:

@pc_rocks:

2.11 GHz, going by Cerny's statement that a 10% drop in power yields only a couple of percent in clocks; DF claims 3 to 4% in their video. I use 5%, so I'm even giving you something extra.

Now, how many frames do you think that will cost the GPU? Lol.

Do you have official documentation indicating the base frequencies?

#169 Paddy345
Member since 2007 • 860 Posts

The only way someone wouldn't be happy with the next-gen systems is if they believed the crap that claimed they would actually be better than an ultra gaming PC.

#170  Edited By Pedro
Member since 2002 • 73971 Posts

@Paddy345 said:

The only way someone wouldn't be happy with the next-gen systems is if they believed the crap that claimed they would actually be better than an ultra gaming PC.

What does that even matter, even if it were true? Ultra is generally a useless and arbitrary setting on PC, with the largest performance hit and the smallest visual gains. BTW, I am not disagreeing with your statement.

#171 Paddy345
Member since 2007 • 860 Posts

@Pedro: An ultra gaming PC means the best of all the components available right now.

#172 deactivated-6092a2d005fba
Member since 2015 • 22663 Posts

@tormentos said:

@i_p_daily said:

You do enjoy eating shit don't you, i'm just getting tired of feeding it to you. When will that pea brain of yours understand that I don't care about da power, I care about making fun of you cows for the lack of it.

As for your exclusive explanation, why the **** are you telling me for you muppet I showed in my link that Godawfall is not an exclusive as stated by the devs of the game, go tell your fellow cow josh.

And now that's done, time for some more mocking lol...

That is the way you see it,but then again you can't even see your issues with lesbians,so all i can say for sure is that you have a twisted vision of reality.

I explain it to you because i know how limite your mind is,and since you claim ff7 was on xbox when it isnt so.😂

Now dial back your tears.

FF7 is on Xbox, FF7R is not, and you say my mind is limited when you can't even put an R at the end; and when I'm telling you I'm fucking with you, you still don't get it.

Now refute my claim that FF7 is on Xbox, show that it is NOT, and while you're at it show me where I said FF7R is on Xbox. Go on, I look forward to the drivel that ensues.

I don't have issues with lesbians; you have issues with others calling a game where the lead character is a lesbian "the Lesbian of Us 2".

#173 deactivated-6092a2d005fba
Member since 2015 • 22663 Posts

I think it's time to change the name of tormentos to tormeltdownos.

It has a good ring to it, and it's a fact.

#174 deactivated-5efed3ebc2180
Member since 2006 • 923 Posts

@Paddy345 said:

@Pedro: An ultra gaming PC means the best of all the components available right now.

No, that's called "top-end", where you would need to do a new build every 12 months and upgrade the components every 3 months in between...

#175 ermacness
Member since 2005 • 10956 Posts

@i_p_daily:

You are the VERY LAST person to tell ANYONE or ANYTHING to "get a grip". In order for you to suggest that, you have to already have a grip, and from your recent activities here, one could and probably would suggest that you're a LONG ways from "having a grip".

#176 Sevenizz
Member since 2010 • 6462 Posts

Where’s this survey that Xbox fans are not happy with XSX specs?

#177 deactivated-6092a2d005fba
Member since 2015 • 22663 Posts

@ermacness said:

@i_p_daily:

You are the VERY LAST person to tell ANYONE or ANYTHING to "get a grip". In order for you to suggest that, you have to already have a grip, and from your recent activities here, one could and probably would suggest that you're a LONG ways from "having a grip".

My recent activities lol. Another day, another cow that doesn't like my comments, no surprise there.

I'm of sound mind, you should be getting tormeltdownos some help though :)

#178  Edited By ronvalencia
Member since 2008 • 29612 Posts

@pc_rocks said:
@ronvalencia said:

For a given generation, AMD's memory access times are the same for DDR3 (A6-3650) and GDDR5 (HD 6850).

Intel memory controller's access times are lower, hence superior over AMD's.

The kicker..

From https://www.computer.org/csdl/journal/si/2019/08/08674801/18IluD0rWjS

GDDR5 and DDR4 have similar latency.

GDDR6 has lower latency when compared to GDDR5 (a claim made by Rambus).

Ryzen 7 3700X has 32 MB L3 cache, hence the above argument is mostly a minor issue outside of PC benchmarks for reaching the best score.

https://www.rambus.com/blog_category/hbm-and-gddr6/

AI-specific hardware has been a catalyst for this tremendous growth, but there are always bottlenecks that must be addressed. A poll of the audience participants found that memory bandwidth was their #1 area for needed focus. Steve and Bill agreed and explored how HBM2E and GDDR6 memory could help advance AI/ML to the next level.

Steve discussed how HBM2E provides unsurpassed bandwidth and capacity, in a very compact footprint, that is a great fit for AI/ML training with deployments in heat- and space-constrained data centers. At the same time, the excellent performance and low latency of GDDR6 memory, built on time-tested manufacturing processes, make it an ideal choice for AI/ML inference which is increasingly implemented in powerful “IoT” devices such as ADAS in cars and trucks.

No one is disputing that GDDR6 has lower latency than GDDR5. The point of discussion is DDR is better than GDDR for CPUs.

DDR4 has higher memory storage per $$$

GDDR5 has higher memory bandwidth per $$$ which is important for render target workloads.
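As a toy illustration of those two different "per dollar" metrics, with completely made-up placeholder prices (not market data):

```python
# Toy cost-efficiency comparison; capacities, bandwidths and prices are placeholders.
parts = {
    "DDR4 (assumed 16 GB DIMM, single channel)": {"gb": 16, "gbps": 25.6,  "usd": 60.0},
    "GDDR5 (assumed 8 GB, 256-bit subsystem)":   {"gb": 8,  "gbps": 256.0, "usd": 60.0},
}

for name, p in parts.items():
    print(f"{name}: {p['gb'] / p['usd']:.2f} GB per $, {p['gbps'] / p['usd']:.2f} GB/s per $")
```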

#179 PC_Rocks
Member since 2018 • 8611 Posts

@tormentos said:

@pc_rocks:

2.11 GHz, going by Cerny's statement that a 10% drop in power yields only a couple of percent in clocks; DF claims 3 to 4% in their video. I use 5%, so I'm even giving you something extra.

Now, how many frames do you think that will cost the GPU? Lol.

Cool, some progress. We have one metric out, good. What about CPU?

#180 PC_Rocks
Member since 2018 • 8611 Posts
@ronvalencia said:
@pc_rocks said:
@ronvalencia said:

For a given generation, AMD's memory access times are the same for DDR3 (A6-3650) and GDDR5 (HD 6850).

Intel memory controller's access times are lower, hence superior over AMD's.

The kicker..

From https://www.computer.org/csdl/journal/si/2019/08/08674801/18IluD0rWjS

GDDR5 and DDR4 have similar latency.

GDDR6 has lower latency when compared to GDDR5 (a claim made by Rambus).

Ryzen 7 3700X has 32 MB L3 cache, hence the above argument is mostly a minor issue outside of PC benchmarks for reaching the best score.

https://www.rambus.com/blog_category/hbm-and-gddr6/

AI-specific hardware has been a catalyst for this tremendous growth, but there are always bottlenecks that must be addressed. A poll of the audience participants found that memory bandwidth was their #1 area for needed focus. Steve and Bill agreed and explored how HBM2E and GDDR6 memory could help advance AI/ML to the next level.

Steve discussed how HBM2E provides unsurpassed bandwidth and capacity, in a very compact footprint, that is a great fit for AI/ML training with deployments in heat- and space-constrained data centers. At the same time, the excellent performance and low latency of GDDR6 memory, built on time-tested manufacturing processes, make it an ideal choice for AI/ML inference which is increasingly implemented in powerful “IoT” devices such as ADAS in cars and trucks.

No one is disputing that GDDR6 has lower latency than GDDR5. The point of discussion is DDR is better than GDDR for CPUs.

DDR4 has higher memory storage per $$$

GDDR5 has higher memory bandwidth per $$$ which is important for render target workloads.

Irrelevant. The point of discussion is that DDR is better than GDDR for CPUs. Do you have a source that says otherwise?

#181 Pedro
Member since 2002 • 73971 Posts

@tormentos said:

@Pedro:

Seriously in here a place were lemmings use to claim a 40% gap would be magically close by an API?

GTFO.. You are one of those that misteriously have a old ass join date but we're nowhere to be found when the PS4 was owning the Xbox,no you started to post regularly after the 2016 when the X model was coming,so does i_hatelesbians_daily,sealionact and several other strangely coinciding with the mass lemmings rapture which this place saw all of the sudden.😂

The best part is that you PRETEND to be a developer 😂 when you are a MS fanboy,you claimed you owned a PS3 last gen and didn't even know this gen Sony had cross buys with superior graphics.

So spare us the you are just teasing you like most lemmings here CARE.😂

Tormentos: "I can't counter argue against Pedro, I must do a flashback on irrelevant shit and talk about join date. My PTSD is getting stronger. The 360 and lemmings were mean to me and Pedro wasn't there to see how mean they were. Breath in and try to hide my meltdown."😂🤣

#182  Edited By ronvalencia
Member since 2008 • 29612 Posts

@pc_rocks said:
@ronvalencia said:

DDR4 has higher memory storage per $$$

GDDR5 has higher memory bandwidth per $$$ which is important for render target workloads.

Irrelevant. The point of discussion is that DDR is better than GDDR for CPUs. Do you have a source that says otherwise?

I have shown you that DDR4 and GDDR5 latency are similar, hence the difference comes down to where the $$$ are focused.

DDR4 scales memory capacity better per $$$.

GDDR5 scales memory bandwidth better per $$$.

GDDR5 was used for Intel Xeon Phi parts with 57 to 61 CPU cores and 512-bit vector units.

Eight Zen 2 CPU cores with 256-bit AVX2 are nowhere near the vector math intensity of an Intel Xeon Phi with 57 CPU cores and 512-bit vectors.
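A rough sketch of that "vector math intensity" comparison in terms of peak FP32 throughput; the clock speeds and FLOPs-per-cycle figures are simplified assumptions for illustration:

```python
# Peak vector FP32 throughput = cores * FLOPs per core per cycle * clock (GHz) -> GFLOPS.
def peak_gflops(cores: int, flops_per_cycle: int, clock_ghz: float) -> float:
    return cores * flops_per_cycle * clock_ghz

# Assumptions: a 512-bit FMA unit does 16 FP32 lanes * 2 ops = 32 FLOPs/cycle;
# Xeon Phi class: one such unit per core at ~1.1 GHz.
# Zen 2: two 256-bit FMA pipes per core -> also 32 FLOPs/cycle, at ~3.5 GHz.
print(f"57-core Xeon Phi class: ~{peak_gflops(57, 32, 1.1) / 1000:.1f} TFLOPS FP32")
print(f"8-core Zen 2:           ~{peak_gflops(8, 32, 3.5) / 1000:.1f} TFLOPS FP32")
```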

#183 PC_Rocks
Member since 2018 • 8611 Posts

@ronvalencia said:

I have shown you that DDR4 and GDDR5 latency are similar, hence the difference comes down to where the $$$ are focused.

DDR4 scales memory capacity better per $$$.

GDDR5 scales memory bandwidth better per $$$.

GDDR5 was used for Intel Xeon Phi parts with 57 to 61 CPU cores and 512-bit vector units.

Eight Zen 2 CPU cores with 256-bit AVX2 are nowhere near the vector math intensity of an Intel Xeon Phi with 57 CPU cores and 512-bit vectors.

You further proved my point. GDDR is used for workloads that needs very high bandwidth but can compromise latency. PC CPUs and in general server CPUs don't need that.

Vector math, simulations, AI is another thing.

#184  Edited By ronvalencia
Member since 2008 • 29612 Posts

@pc_rocks said:
@ronvalencia said:

I have shown you that DDR4 and GDDR5 latency are similar, hence the difference comes down to where the $$$ are focused.

DDR4 scales memory capacity better per $$$.

GDDR5 scales memory bandwidth better per $$$.

GDDR5 was used for Intel Xeon Phi parts with 57 to 61 CPU cores and 512-bit vector units.

Eight Zen 2 CPU cores with 256-bit AVX2 are nowhere near the vector math intensity of an Intel Xeon Phi with 57 CPU cores and 512-bit vectors.

You further proved my point. GDDR is used for workloads that needs very high bandwidth but can compromise latency. PC CPUs and in general server CPUs don't need that.

Vector math, simulations, AI is another thing.

I already showed a DDR4 vs GDDR5 comparison; latency is not compromised.

#185 tormentos
Member since 2003 • 33793 Posts
@pc_rocks said:

Cool, some progress. We have one metric out, good. What about CPU?

The same applies to both; apply the same 5% frequency drop and you get 3.32 GHz. So again, how many frames do you think the PS5 will lose from such a small downclock?

If you apply PC metrics here, you will see it's nothing, a frame or two at best.
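The arithmetic behind those figures, as a minimal sketch (assuming the 3.5 GHz and 2.23 GHz caps and the 5% worst-case drop discussed here):

```python
# Apply an assumed worst-case 5% downclock to the PS5's advertised clock caps.
DROP = 0.05
cpu_cap_ghz = 3.5
gpu_cap_ghz = 2.23

print(f"CPU: {cpu_cap_ghz * (1 - DROP):.2f} GHz")  # ~3.32 GHz
print(f"GPU: {gpu_cap_ghz * (1 - DROP):.2f} GHz")  # ~2.12 GHz
```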

@Pedro said:

Tormentos: "I can't counter argue against Pedro, I must do a flashback on irrelevant shit and talk about join date. My PTSD is getting stronger. The 360 and lemmings were mean to me and Pedro wasn't there to see how mean they were. Breath in and try to hide my meltdown."😂🤣

You have no way to justify it, which again sends you into deflect mode.🤣🤣🤣

#186 PC_Rocks
Member since 2018 • 8611 Posts

@ronvalencia said:
@pc_rocks said:

You further proved my point. GDDR is used for workloads that needs very high bandwidth but can compromise latency. PC CPUs and in general server CPUs don't need that.

Vector math, simulations, AI is another thing.

As I showed from the DDR4 vs GDDR5 comparison, latency is not compromised.

Nah... you didn't. GDDR5, or any GDDR in general, has far more latency than DDR. So the point still stands that DDR is better than GDDR for CPUs.

I also understand that scaling is another problem with GDDR, hence it hasn't replaced DDR for CPUs and won't for the foreseeable future.

#187 PC_Rocks
Member since 2018 • 8611 Posts
@tormentos said:
@pc_rocks said:

Cool, some progress. We have one metric out, good. What about CPU?

The same applies to both; apply the same 5% frequency drop and you get 3.32 GHz. So again, how many frames do you think the PS5 will lose from such a small downclock?

If you apply PC metrics here, you will see it's nothing, a frame or two at best.

Good! That wasn't so hard, now was it?

Only one last thing, still waiting for sources on GDDR is better than DDR for CPUs.

#188 tormentos
Member since 2003 • 33793 Posts

@pc_rocks said:

Good! That wasn't so hard, now was it?

Only one last thing, still waiting for sources on GDDR is better than DDR for CPUs.

Just because you refuse to admit it doesn't mean I didn't prove it.

#189 ronvalencia
Member since 2008 • 29612 Posts

@pc_rocks said:
@ronvalencia said:
@pc_rocks said:

You further proved my point. GDDR is used for workloads that needs very high bandwidth but can compromise latency. PC CPUs and in general server CPUs don't need that.

Vector math, simulations, AI is another thing.

As I showed from the DDR4 vs GDDR5 comparison, latency is not compromised.

Nah... you didn't. GDDR5, or any GDDR in general, has far more latency than DDR. So the point still stands that DDR is better than GDDR for CPUs.

I also understand that scaling is another problem with GDDR, hence it hasn't replaced DDR for CPUs and won't for the foreseeable future.

From https://www.computer.org/csdl/journal/si/2019/08/08674801/18IluD0rWjS

GDDR5 and DDR4 have similar latency.

#190  Edited By Pedro
Member since 2002 • 73971 Posts

@tormentos said:
@Pedro said:

Tormentos: "I can't counter argue against Pedro, I must do a flashback on irrelevant shit and talk about join date. My PTSD is getting stronger. The 360 and lemmings were mean to me and Pedro wasn't there to see how mean they were. Breath in and try to hide my meltdown."😂🤣

You have no way to justify it, which again sends you into deflect mode.🤣🤣🤣

Justify what? You can't counter argue and you are still crying about me not being in the SW when the xbox fannies rekt you for over 7 years. They rekt you so bad that you can't even let it go another 7 years later. 😂🤣

"The 360 and lemmings were mean to me and Pedro wasn't there to see how mean they were."😂

#191 PC_Rocks
Member since 2018 • 8611 Posts

@tormentos said:
@pc_rocks said:

Good! That wasn't so hard, now was it?

Only one last thing, still waiting for sources on GDDR is better than DDR for CPUs.

Just because you refuse to admit it doesn't mean I didn't prove it.

You didn't prove sh*t.

So, for the 25th time, where are the sources for the claim that GDDR is better than DDR for CPUs?

#192 PC_Rocks
Member since 2018 • 8611 Posts
@ronvalencia said:
@pc_rocks said:
@ronvalencia said:
@pc_rocks said:

You further proved my point. GDDR is used for workloads that needs very high bandwidth but can compromise latency. PC CPUs and in general server CPUs don't need that.

Vector math, simulations, AI is another thing.

As I showed from the DDR4 vs GDDR5 comparison, latency is not compromised.

Nah... you didn't. GDDR5, or any GDDR in general, has far more latency than DDR. So the point still stands that DDR is better than GDDR for CPUs.

I also understand that scaling is another problem with GDDR, hence it hasn't replaced DDR for CPUs and won't for the foreseeable future.

From https://www.computer.org/csdl/journal/si/2019/08/08674801/18IluD0rWjS

GDDR5 and DDR4 have similar latency.

What about the total overall latency, including writes? I'm pretty sure from multiple sources that DDR4 and GDDR5 latency isn't close.

#193 BenjaminBanklin
Member since 2004 • 11551 Posts

Welp, THIS game isn't gonna be running at 4K/60 on Series X.

https://www.windowscentral.com/assassins-creed-valhalla-will-run-least-30fps-xbox-series-x-according-ubisoft

Unless Ubisoft does some crazy nerfing on the graphics. Next gen already busting at the seams.