Those who said the X1X GPU is a match for the 1070 come forward and apologize

#551 ArchoNils2
Member since 2005 • 10534 Posts

This thread is awesome, please keep going :D I can't really add much, but I really enjoy reading it :)

#552 Crimson_V
Member since 2014 • 166 Posts

@commander:

Its specs are public:
GPU Name: Scorpio
Architecture: GCN 4.0
Process Size: 16 nm
Shading Units: 2560
TMUs: 160
ROPs: 32
Compute Units: 40
Pixel Rate: 37.50 GPixel/s
Texture Rate: 187.5 GTexel/s
Floating-point performance: 6,001 GFLOPS
Memory Size: 12288 MB
Memory Type: GDDR5
Memory Bus: 384 bit
Bandwidth: 326.4 GB/s

so it's roughly equivalent to an RX 480, meaning it's more than 50% behind the 1070
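
The 6 TFLOPS figure in that spec sheet is just shader count times clock times two ops per cycle (one fused multiply-add). A quick sketch of the arithmetic, using the widely reported 1172 MHz clock for the X1X GPU and the reference RX 480's 1266 MHz boost clock:

```python
# Theoretical single-precision throughput in GFLOPS:
# shading units x 2 ops per cycle (fused multiply-add) x clock in MHz.
def gflops(shading_units: int, clock_mhz: int) -> float:
    return shading_units * 2 * clock_mhz / 1000

scorpio = gflops(2560, 1172)  # Xbox One X "Scorpio" GPU
rx480 = gflops(2304, 1266)    # reference RX 480 at boost clock

print(round(scorpio))  # 6001, matching the spec sheet above
print(round(rx480))    # 5834
```

Peak FLOPS says nothing about memory contention or driver efficiency, so near-identical numbers don't guarantee identical frame rates.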

#553  Edited By tdkmillsy
Member since 2003 • 6617 Posts

@crimson_v said:

@commander:

Its specs are public:
GPU Name: Scorpio
Architecture: GCN 4.0
Process Size: 16 nm
Shading Units: 2560
TMUs: 160
ROPs: 32
Compute Units: 40
Pixel Rate: 37.50 GPixel/s
Texture Rate: 187.5 GTexel/s
Floating-point performance: 6,001 GFLOPS
Memory Size: 12288 MB
Memory Type: GDDR5
Memory Bus: 384 bit
Bandwidth: 326.4 GB/s

so it's equivalent to an RX 480 meaning its more than 50% behind the 1070

The Xbox One X GPU has

more shading units

more TMUs

more floating-point throughput

more memory

a much wider memory bus

and much more bandwidth

How is that equivalent to a 480?

https://www.techpowerup.com/gpudb/2848/radeon-rx-480

At least you stated what you think it is; others just say it's not this without sticking their neck out and saying what it is.

#554 Juub1990
Member since 2013 • 12622 Posts
@commander said:

sorry but there's no proof the 1070 is 20 percent better at this time.

If they uncap the frame rate or give a detailed analysis, sure, we can dispute that. I already said it was pointless to compare games with overhead because the frame rate on the X1X is seldom uncapped, making comparisons impossible.

#555  Edited By scatteh316
Member since 2004 • 10273 Posts

@tdkmillsy said:
@crimson_v said:

@commander:

Its specs are public:
GPU Name: Scorpio
Architecture: GCN 4.0
Process Size: 16 nm
Shading Units: 2560
TMUs: 160
ROPs: 32
Compute Units: 40
Pixel Rate: 37.50 GPixel/s
Texture Rate: 187.5 GTexel/s
Floating-point performance: 6,001 GFLOPS
Memory Size: 12288 MB
Memory Type: GDDR5
Memory Bus: 384 bit
Bandwidth: 326.4 GB/s

so it's equivalent to an RX 480 meaning its more than 50% behind the 1070

Xbox One X gpu has

more shading units

more tmu's

more floating point

more memory

much more memory bus

and much more bandwidth

How is it equivalent to a 480

https://www.techpowerup.com/gpudb/2848/radeon-rx-480

At least you stated what you think it is, others just say its not this without sticking their neck out and say what it is.

It's around an RX 480..... 'Much more bandwidth', as you put it, is a stretch, as the Xbone has to share it with the whole system.

And if you look at benchmarks comparing a Vega 56 vs a Vega 64 at the same clocks, you'll find there's nothing in it despite the Vega 64's shader advantage.

Same as other AMD GPUs.... 290 vs 290X.... 7950 vs 7970..... The top-tier cards have never really been faster at the same clocks than the second-tier cards, and certainly nowhere near as much as the paper specs would imply, as AMD's GPUs are terribly inefficient.

At best you can call Scorpio's GPU equal to an RX 480 with a mild overclock.

#556 waahahah
Member since 2014 • 2462 Posts

@Juub1990 said:

@waahahah: We have to wait until we get the whole rundown. As it stands we don’t know how the X1X version stacks up up not to mention the 1070 still performs better regardless. It’s 20% better.

@commander said:

sorry but there's no proof the 1070 is 20 percent better at this time.

This^, you pointed out 20% and literally said we can't compare because one of them is capped. It's closer to the 1070 than the 1070 is to the 1080 in this case. And we aren't talking about a world of difference either. I don't expect it to be exact because AMD doesn't have as much processing power, but it's so close, yet you seem to have a big issue.

I actually couldn't find a 1060 benchmark at 4K Ultra. Seems it's more than 20% away... although the RX 580 does 47 fps, at least in one test. Are these 1070 numbers right?

https://www.youtube.com/watch?time_continue=72&v=ozrnN82sUG4

#557 Zero_epyon
Member since 2004 • 20502 Posts

@Juub1990 said:
@commander said:

sorry but there's no proof the 1070 is 20 percent better at this time.

If they uncap the frame rate or give a detail analysis sure, we can dispute that. I already said it was pointless to compare games with overhead because the frame rate on the X1X is seldom uncapped making comparisons impossible.

NX Gamer ran The Witcher 3 unpatched on the Xbox One X at 1080p. It was able to hit 60 fps at times, but in combat or riding through a town it dipped into the 40s and 30s. The GTX 1070 can do 1080p at High settings at an average of 80+ fps, and at Ultra it hovers around 60.

I think DF did something similar with AC Unity. It had the same issues: 60 fps, but it never stuck to it when the scenes got intense.

#558 Juub1990
Member since 2013 • 12622 Posts
@waahahah said:

This^, you pointed out 20% and literally said we can't because one of them is capped. Its closer to the 1070 than the 1070 is to the 1080 in this case. And we aren't talking about a world of difference either. I don't expect it to be exact because AMD doesn't have as much processing power but but its so close you seem to have a big issue.

I actually couldn't find a 1060 bench mark at 4k ultra. Seems to be its more than 20% away... Although the RX580 does 47fps... at least in one test. Are these 1070 numbers right?

https://www.youtube.com/watch?time_continue=72&v=ozrnN82sUG4

Then why say it's close when it isn't? The comparison is obviously invalid. Would the X1X average 31? 32 fps? Is Depth of Field set to Ultra? Very High? Would it lose 2-4 fps if everything was set to Ultra?

The RX 580 is what I think the X1X is equivalent to: right between a 1060 and a 1070. I had previously said between a 980 and a 1070, which is basically the same thing. I expected it to be about 15-20% slower than a GTX 1070, which so far seems to be right where it falls.

According to TechPowerUp the difference at 1080p between the RX 580 and GTX 1070 is 42%, but as we go up in resolution the gap gets smaller, much smaller in fact. I don't think the gap between the X1X and 1070 is that big, but I also don't think the X1X can compete with a 1070.
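
Percentage claims like these are easy to mix up because "A is X% faster than B" and "B is X% slower than A" use different bases. A small helper makes the conversion explicit (the frame rates below are placeholders, not measurements):

```python
def percent_faster(fps_a: float, fps_b: float) -> float:
    """How much faster A is than B, as a percentage of B's frame rate."""
    return (fps_a / fps_b - 1) * 100

def percent_slower(fps_a: float, fps_b: float) -> float:
    """How much slower A is than B, as a percentage of B's frame rate."""
    return (1 - fps_a / fps_b) * 100

# Placeholder numbers: if a 1070 renders 142 fps where an RX 580 renders 100,
# the 1070 is 42% faster, but the 580 is only ~29.6% slower.
print(round(percent_faster(142, 100), 1))  # 42.0
print(round(percent_slower(100, 142), 1))  # 29.6
```

So a "42% faster" card and a card "about 30% behind" can describe exactly the same pair of results.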

#559  Edited By commander
Member since 2010 • 16217 Posts

@Zero_epyon said:
@Juub1990 said:
@commander said:

sorry but there's no proof the 1070 is 20 percent better at this time.

If they uncap the frame rate or give a detail analysis sure, we can dispute that. I already said it was pointless to compare games with overhead because the frame rate on the X1X is seldom uncapped making comparisons impossible.

NX Gamer ran Witcher 3 unpatched on the xbox one x at 1080p. It was able to hit 60fps at times but when in combat or riding through a town, it dipped into the 40's and 30's. The GTX 1070 can do 1080p at high settings at an average of 80+ fps and ultra hovers around 60.

I think DF did something similar to that with AC Unity. Had the same issues, 60fps but never stuck to it when the scenes got intense.

It will not do that with a CPU similar to the Xbox One X's.

The Witcher 3 actually ran better on my i5-2500 and GTX 950 than on my i3-4170 and GTX 970.

I can get a higher resolution with the GTX 970, sure, but the frame rate will always be worse than with the i5-2500 and GTX 950 at low-to-medium settings.

#560 Xplode_games
Member since 2011 • 2540 Posts

@Zero_epyon said:
@Juub1990 said:
@commander said:

sorry but there's no proof the 1070 is 20 percent better at this time.

If they uncap the frame rate or give a detail analysis sure, we can dispute that. I already said it was pointless to compare games with overhead because the frame rate on the X1X is seldom uncapped making comparisons impossible.

NX Gamer ran Witcher 3 unpatched on the xbox one x at 1080p. It was able to hit 60fps at times but when in combat or riding through a town, it dipped into the 40's and 30's. The GTX 1070 can do 1080p at high settings at an average of 80+ fps and ultra hovers around 60.

I think DF did something similar to that with AC Unity. Had the same issues, 60fps but never stuck to it when the scenes got intense.

You're talking about CPU power there because that was the Witcher unpatched at a low resolution.

#561 Zero_epyon
Member since 2004 • 20502 Posts

@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.

#562  Edited By waahahah
Member since 2014 • 2462 Posts

@Juub1990 said:
@waahahah said:

This^, you pointed out 20% and literally said we can't because one of them is capped. Its closer to the 1070 than the 1070 is to the 1080 in this case. And we aren't talking about a world of difference either. I don't expect it to be exact because AMD doesn't have as much processing power but but its so close you seem to have a big issue.

I actually couldn't find a 1060 bench mark at 4k ultra. Seems to be its more than 20% away... Although the RX580 does 47fps... at least in one test. Are these 1070 numbers right?

https://www.youtube.com/watch?time_continue=72&v=ozrnN82sUG4

Then why say it's close when it isn't? The comparison is obviously invalid. Would the X1X average 31? 32fps? Is Depth of Field set to Ultra? Very High? Would it lose 2-3-4 fps if everything was set to Ultra?

RX 580 is what I think the X1X is equivalent to. Right between a 1060 and a 1070. I had previously said between a 980 and a 1070 which is basically the same thing. I expected it to be about 15-20% slower than a GTX 1070 which so far seems to be right where it falls.

According to techpowerup the difference at 1080p between the RX 580 and GTX 1070 is 42% but as we go up in the resolution, the gap gets smaller, much smaller in fact. I don't think the gap between the X1X and 1070 is that big but I also don't think the X1X can compete with a 1070.

From a viewer standpoint, it's effectively the same when there isn't a CPU bottleneck. Do you think anyone cares about a 6 fps difference on average? That doesn't include dips. From a nitpicking standpoint there are some differences, but when it's GPU-bound it's pretty damn close, and in those cases the end user probably won't notice the difference.

20% is close. I don't think the 1060 is matching it at that resolution, although I tried to find a benchmark.

The RX 580 is matched pretty equally with the 1060 in a lot of benchmarks, which is why I thought to look that up. My point about the RX 580 being at 47 fps is that it's better than the 1080 benchmark I posted.

#563 Crimson_V
Member since 2014 • 166 Posts

@tdkmillsy:

@tdkmillsy said:
@crimson_v said:

@commander:

Its specs are public:
GPU Name: Scorpio
Architecture: GCN 4.0
Process Size: 16 nm
Shading Units: 2560
TMUs: 160
ROPs: 32
Compute Units: 40
Pixel Rate: 37.50 GPixel/s
Texture Rate: 187.5 GTexel/s
Floating-point performance: 6,001 GFLOPS
Memory Size: 12288 MB
Memory Type: GDDR5
Memory Bus: 384 bit
Bandwidth: 326.4 GB/s

so it's equivalent to an RX 480 meaning its more than 50% behind the 1070

Xbox One X gpu has

more shading units

more tmu's

more floating point

more memory

much more memory bus

and much more bandwidth

How is it equivalent to a 480

https://www.techpowerup.com/gpudb/2848/radeon-rx-480

At least you stated what you think it is, others just say its not this without sticking their neck out and say what it is.

The RX 480 has 4 fewer CUs, and yet most commercially sold RX 480s have about the same floating-point performance as the 1X's GPU.

https://www.techpowerup.com/gpudb/2848/radeon-rx-480

This site lists the reference edition, which has a lower clock speed than most other commercially available cards (and even the reference card is just 0.16 TFLOPS behind the 1X's GPU).

The wider memory bus would definitely be a plus, but sadly it's shared memory, and the CPU takes up some of that bandwidth and some cycles.

#564 Xabiss
Member since 2012 • 4749 Posts

@Zero_epyon said:

@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.

You also have to account for the fact that the game was not coded with the Xbox One X in mind. Yes, it can scale and do all of that, but when a game is not specifically coded for a certain platform you will not get all of the performance out of the system. It is like Boost Mode on the Pro: yes, games run better, but when they are specifically coded to work with the Pro you get better performance out of it.

#565 Zero_epyon
Member since 2004 • 20502 Posts

@Xabiss said:
@Zero_epyon said:

@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.

You have to also account for that the game was not codded with the Xbox One X in mind. Yes it can scale and do all of that, but when the game was not specifically coded for a certain platform you will not get all of the performance out of the system. It is like boost mode on the Pro, yes games run better but when the are specifically coded to work with the Pro you will get better performance out of it.

Fair point.

#566 Xplode_games
Member since 2011 • 2540 Posts

@Zero_epyon said:

@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.

The X1X at 1080p is CPU-limited, everyone knows that.

#567  Edited By appariti0n
Member since 2009 • 5193 Posts

@commander: Quote me where I said a 20% performance difference is comparable.

You're the one who pulled the 20% number out of your ass, when in reality the 8600K is currently sitting at a 5% delta from the 7700K in average gaming performance. Not just your cherry-picked titles.

Also, good job completely ignoring the fact that you can't even keep straight which CPUs are being compared, and posted charts with the wrong CPU twice.

So much for your supposed 20% difference. Or are 478 benchmarks still not a large enough sample size?

#568 tdkmillsy
Member since 2003 • 6617 Posts

So we had

1060

1070

1080

480

580

various OC versions of each

benchmarks showing it's closer to one card or the other depending on the game,

developers claiming one thing or another before and after launch,

and professionals (DF, for example) claiming it's like different cards for different games.

It seems practically impossible to accurately say it's exactly like a specific PC card, which makes sense because it's not.

If you said it's a match for the 1070 for ALL games then yes, you should say sorry.

But so should all the others who have claimed it's like a specific card.

#569 Dark_sageX
Member since 2003 • 3561 Posts

@tdkmillsy: Not like a specific card per se; more in line with a specific PC spec that includes a specific card, i.e. a system that has a GTX 1060 (combined with a CPU that doesn't cause a bottleneck, like an i5). Builds with a GTX 1070 tend to also have high-end CPUs and other components (although even mid-range CPUs don't bottleneck it).

So when I say the Xbox One X matches a GTX 1060, I also mean including an i5 (or equivalent) in the benchmarks, and not a GTX 1070.

#570 neversummer75
Member since 2006 • 1136 Posts

@Xplode_games: wow.....just.....wow! Dude, get out of your mother's basement and get a job. Better yet, get a girlfriend. A woman's touch will go a long way toward helping you relax. Your mother's touch does not count. ;) The X is an amazing console. Will it beat out a dedicated gaming system? Nope. Will it come close to matching the experience? Yup. Get a life.

#571  Edited By commander
Member since 2010 • 16217 Posts

@appariti0n said:

@commander: Quote me where I said a 20% performance difference is comparable.

You're the one who pulled the 20% number out of your ass, when in reality the 8600K is currently sitting at a 5% delta from the 7700K in average gaming performance. Not just your cherry picked titles.

Also good job completely ignoring the fact that you can't even keep straight what cpus are being compared, and posted charts with the wrong cpu twice.

So much for your supposed 20% difference. Or is 478 benchmarks now still not a large enough sample size?

There are too many cherries there to call it cherry-picked. Synthetic benchmarks show exactly what the 8600K is: a CPU with 50 percent more cores.

Your random benchmark site crap isn't going to change that. Worst bench turbo speed 4.02 GHz, lmao. The i5-8600K goes to 5 GHz on air.

#572  Edited By appariti0n
Member since 2009 • 5193 Posts

@commander: userbenchmark.com is random shit, but your YouTube video with shitty trance music isn't? LOL.

Lemme guess, Tom's Hardware and AnandTech are probably just some random shit sites as well? Same with eurogamer.net? Except of course when you tried to use it for your own evidence and got owned.

Good lord, you remind me of an anti-vaxxer.

Still waiting for you to quote where I said 20% is comparable, too.

#573 commander
Member since 2010 • 16217 Posts

@appariti0n said:

@commander: userbenchmark.com is random shit, but your youtube video with shitty trance music isn't? LOL.

Lemme guess, tomshardware and anandtech are probably just some random shit sites as well? Same with eurogamer.net? Except of course when you tried to use it for your own evidence and got owned.

good lord you reminder me of an anti-vaxxer.

Still waiting for you to quite where I said 20% is comparable too.

Well, it's the only benchmark that compares the two overclocked, so unless you've got other benchmarks you've got no case.

#574 Heil68
Member since 2004 • 60833 Posts

maybe it can use the powar of teh cloudz!!!!

lmao

#576  Edited By appariti0n
Member since 2009 • 5193 Posts

@commander: See, you can't even read benchmarks properly. I don't know why I bother with you.

UserBenchmark takes user-submitted benchmarks, at whatever clocks the submitting users had set.

Both the 7700K and the 8600K will have a similar ratio of overclocked vs. non-overclocked benchmarks submitted. Note the best and worst benchmarks on both sides, mirroring reality.

#577  Edited By commander
Member since 2010 • 16217 Posts

@appariti0n said:

@commander: see you can't even read benchmarks properly. I don't know why I bother with you.

Userbenchmark takes user submitted benchmarks. At whatever clocks the user who submitted set then at.

Both the 7700k will have a similar ratio of overclocked vs non overclocked benchmarks submitted. Note the best benchmark/worst benchmarks on both sides. Mirroring reality.

If these are the straws you want to grasp at this time, be my guest.

The problem with this site, when you want to do a proper comparison, is that it isn't a closed environment, which warps the results. The users could be running all sorts of stuff while doing these tests, and the PC hardware apart from the CPU differs completely.

They don't show the individual benchmarks separately either; what are they using to test all this, Lost Planet at 480p?

The benchmarks are not all done at the same clock speeds either, so what does it matter that they have a similar ratio of overclocked to non-overclocked submissions? We're comparing the i7-7700K and i5-8600K at a high overclock here; the lower-clocked benchmarks don't matter.

All this while you have 500 vs 80,000 benchmarks?

Like I said, you would make statistics professors cry.
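
For what it's worth, the sample-size part of this argument can be made concrete: the uncertainty of an average shrinks with the square root of the number of independent runs, so even noisy user submissions tighten up quickly at large counts. Whether the runs are measuring comparable configurations is the separate, harder problem. A sketch (the per-run spread of 10 fps is an illustrative placeholder):

```python
import math

def standard_error(stddev: float, n: int) -> float:
    """Standard error of a mean over n independent runs."""
    return stddev / math.sqrt(n)

# Illustrative per-run spread of +/- 10 fps:
print(standard_error(10, 500))    # ~0.447 fps at 500 runs
print(standard_error(10, 80000))  # ~0.035 fps at 80,000 runs
```

So 80,000 uncontrolled runs can still pin down a mean tightly; what they can't do by volume alone is remove systematic bias in how the runs were set up.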

#578 appariti0n
Member since 2009 • 5193 Posts

@commander: What's wrong with Lost Planet at 480p? Are you forgetting you posted that exact benchmark yourself? Now it's worthless because it proved you wrong when you posted the wrong CPUs? LOL.

But hey, if you wanna believe one random YouTube guy is a controlled, statistically sound environment, rather than an aggregate of many users, then be my guest.

And again, I want you to quote me where I said 20% is comparable. You keep making shit up and then can't prove it.

#579  Edited By commander
Member since 2010 • 16217 Posts

@appariti0n said:

@commander: what's wrong with lost planet at 480p? Are you forgetting you posted that exact benchmark yourself? Now it's worthless because it proved you wrong when you posted the wrong CPUs? LOL.

But hey, if you wanna believe one random youtube guy is a controlled, statstically sound environment, rather than an aggregate of many users, then be my guest.

And again, I want you to quote me where I said 20% is comparable. You keep making shit up, and then can't prove it.

You will get a lot of benchmarks with older games and older synthetic benchmarks that bring the i7-7700K close to the Coffee Lake chip because of hyperthreading.

But games that scale well across 6 cores will take advantage of that extra CPU power, as you've seen in games like Watch Dogs 2, Crysis 3, and GTA V, and this is only going to happen with more games in the future.

In the past, i5s were enough for gaming until quad cores became mainstream; then i5s became quad cores and left the older dual cores in the dust, and i3s replaced them. Now i3s are quad cores, i5s are hexacores, and i7s are hexacores with HT. The older quad-core i7s with hyperthreading are not as weak as quad cores without it, but they're still weaker than a hexacore.

When you boost the clock speeds to 5 GHz the difference becomes more pronounced, and that is what this discussion is about. You want to throw old games into the mix? Fine, but it warps the results; in the end, there's a 20 percent performance difference.

You didn't think it would be 20 percent, but that doesn't change the fact that you called it comparable.

#580 appariti0n
Member since 2009 • 5193 Posts

@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, and a random youtube guy is ironclad.

K sounds good.

#581  Edited By commander
Member since 2010 • 16217 Posts

@appariti0n said:

@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, and a random youtube guy is ironclad.

K sounds good.

Games that scale well across 6 threads (or 8) and games that scale well across 6 cores are a different matter.

If a game assumes that 2 threads have to share the same core, then that's going to be a performance penalty on the 6-core, since there will be power left untapped: the 6-core doesn't use any hyperthreading. There's a reason some games run worse with hyperthreading; do you want to use those games for your benchies as well?

But even if a game doesn't scale well with the 8600K, the 7700K still gets beaten at 5 GHz, and those kinds of games/apps will become rarer in the future since the i7 as we knew it has been discontinued. The fact remains that 6-core CPUs will become the new standard and that they have more CPU power available.

There's a pattern you can see in the benchies: the performance difference ranges from 0 to 20 percent, and 20 percent is not an exception. Synthetic benchmarks like Cinebench also show 20 percent. The i7 isn't going to come back from that; these are not GPUs with different architectures and drivers that we're comparing.
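
The core-scaling argument here can be framed with Amdahl's law: how much a game gains from extra cores depends on what fraction of its frame time actually parallelizes. The 0.8 fraction below is an illustrative assumption, not a measured value, but it shows how a 6-core chip can end up roughly 20% ahead of a 4-core at the same clock:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup over one core when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

four = amdahl_speedup(0.8, 4)    # 2.5x over a single core
six = amdahl_speedup(0.8, 6)     # 3.0x over a single core
print(round(six / four - 1, 2))  # 0.2 -> the 6-core is ~20% faster
```

With a lower parallel fraction (an older, poorly threaded game) the gap shrinks toward zero, which is one plausible reading of why the measured delta ranges from 0 to 20 percent across titles.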

#583 ominous_titan
Member since 2009 • 1217 Posts

I love my 1070; it's still more than adequate for most games.

#584 scatteh316
Member since 2004 • 10273 Posts

@commander said:
@appariti0n said:

@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, and a random youtube guy is ironclad.

K sounds good.

games that scale well across 6 threads ( or 8) and games that scale well accross 6 cores is a different matter.

If games take into account that 2 threads have to be used with the same core then that's going to be a performance penalty for the 6 core, since there will be power that will be left untapped. The 6 core doesn't use any hyperthreading.There's a reason some games run worse with hyperthreading, you want to use those games as well for your benchies?

There is so much stupid in this post.

#585 Juub1990
Member since 2013 • 12622 Posts

@ominous_titan said:

I love my 1070 still over adequate for most games.

most?

#586 commander
Member since 2010 • 16217 Posts

@scatteh316 said:
@commander said:
@appariti0n said:

@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, and a random youtube guy is ironclad.

K sounds good.

games that scale well across 6 threads ( or 8) and games that scale well accross 6 cores is a different matter.

If games take into account that 2 threads have to be used with the same core then that's going to be a performance penalty for the 6 core, since there will be power that will be left untapped. The 6 core doesn't use any hyperthreading.There's a reason some games run worse with hyperthreading, you want to use those games as well for your benchies?

There is so much stupid in this post.

Just because you don't understand it doesn't mean someone or something else is stupid.

#587 Crimson_V
Member since 2014 • 166 Posts
@scatteh316 said:
@commander said:
@appariti0n said:

@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, and a random youtube guy is ironclad.

K sounds good.

games that scale well across 6 threads ( or 8) and games that scale well accross 6 cores is a different matter.

If games take into account that 2 threads have to be used with the same core then that's going to be a performance penalty for the 6 core, since there will be power that will be left untapped. The 6 core doesn't use any hyperthreading.There's a reason some games run worse with hyperthreading, you want to use those games as well for your benchies?

There is so much stupid in this post.

Its not as stupid as you think, the way he explained it isn't great tough.

If you are able to keep the cores fully! busy (with tasks like numerical math and crunching organized data) or if the performance of your main thread is more important then the MC performance that you gain from having HT you are better of without hyper threading, and the reason for that is that the instructions of 2 threads on a single core in the pipeline could end up fighting over resources (having to wait a cycle for the other) like for example due to the lack of cache they could end up evicting each other's memory.

It rarely happens in modern times tough resources are more generously allocated for each thread, the cpu's micro code got better, OS kernels got better and applications are also better coded with HT in mind.

These days I highly advise against disabling any modern SMT solution (for an everyday consumer).
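The tradeoff described above can be sketched as a toy throughput model. This is purely illustrative; the `contention` factor is a made-up parameter standing in for cache eviction and port sharing, not a measured hardware figure:

```python
# Toy model of SMT scaling -- illustrative only, not a hardware simulation.
# Assumption: two hardware threads sharing one core each lose some fraction
# of their throughput to contention over caches and execution resources.

def core_throughput(threads_per_core: int, contention: float) -> float:
    """Relative throughput of one physical core.

    threads_per_core: 1 (SMT off) or 2 (SMT on)
    contention: fraction of per-thread work lost to resource fighting
    """
    if threads_per_core == 1:
        return 1.0
    # Each of the two threads runs slower than a lone thread would,
    # but together they may still beat a single thread's throughput.
    per_thread = 1.0 - contention
    return threads_per_core * per_thread

# Light contention (stall-heavy, mixed workloads): SMT wins.
assert core_throughput(2, 0.3) > core_throughput(1, 0.0)  # 1.4 > 1.0

# Heavy contention (core already saturated, threads evicting each
# other's cache lines -- the case described above): SMT loses.
assert core_throughput(2, 0.6) < core_throughput(1, 0.0)  # 0.8 < 1.0
```

The crossover sits at `contention = 0.5` in this model; real workloads land on either side of it, which is why SMT helps some games and hurts others.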

Avatar image for tdkmillsy
tdkmillsy

6617

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#588 tdkmillsy
Member since 2003 • 6617 Posts

@Dark_sageX said:

@tdkmillsy: Not a specific card per se, more a specific PC spec that includes a specific card, i.e. a system that has a GTX 1060 combined with a CPU that doesn't cause a bottleneck, like an i5. Builds with a GTX 1070 tend to also have high-end CPUs and other components (although even mid-range CPUs don't bottleneck it).

So when I say the X One X matches a GTX 1060, I also mean including an i5 (or equivalent) in the benchmarks, not a GTX 1070.

Fair enough, but someone on here will pick a game that performs better on Xbox One X and claim you are wrong.

I don't think you are a million miles away, but there will be games that run closer to the 1070 and some (more CPU-bound) that are around the 1060.

Being totally vague, I'd say it's somewhere between the 1060 and 1070, which is quite a big range in itself.

Avatar image for scatteh316
scatteh316

10273

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#589 scatteh316
Member since 2004 • 10273 Posts

@crimson_v said:
@scatteh316 said:
@commander said:
@appariti0n said:

@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, and a random youtube guy is ironclad.

K sounds good.

Games that scale well across 6 threads (or 8) and games that scale well across 6 cores are a different matter.

If games account for two threads having to share the same core, that's a performance penalty for the 6-core, since power will be left untapped; the 6-core doesn't use any hyperthreading. There's a reason some games run worse with hyperthreading. Do you want to use those games for your benchmarks as well?

There is so much stupid in this post.

It's not as stupid as you think, though the way he explained it isn't great.

If you are able to keep the cores fully busy (with tasks like numerical math and crunching well-organized data), or if the performance of your main thread matters more than the multi-core throughput you gain from HT, you are better off without hyperthreading. The reason is that the instructions of two threads on a single core can end up fighting over pipeline resources (each having to wait a cycle for the other); for example, with limited cache they can end up evicting each other's data.

That rarely happens these days, though: resources are allocated more generously per thread, CPU microcode got better, OS kernels got better, and applications are also coded with HT in mind.

These days I highly advise against disabling any modern SMT solution (for an everyday consumer).

In a perfect world..... but Windows core parking and thread affinity are far from perfect. I remember all the drama it caused when AMD released Bulldozer and its module design.

Avatar image for crimson_v
Crimson_V

166

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 5

#590 Crimson_V
Member since 2014 • 166 Posts

@tdkmillsy said:
@Dark_sageX said:

@tdkmillsy: Not a specific card per se, more a specific PC spec that includes a specific card, i.e. a system that has a GTX 1060 combined with a CPU that doesn't cause a bottleneck, like an i5. Builds with a GTX 1070 tend to also have high-end CPUs and other components (although even mid-range CPUs don't bottleneck it).

So when I say the X One X matches a GTX 1060, I also mean including an i5 (or equivalent) in the benchmarks, not a GTX 1070.

Fair enough, but someone on here will pick a game that performs better on Xbox One X and claim you are wrong.

I don't think you are a million miles away, but there will be games that run closer to the 1070 and some (more CPU-bound) that are around the 1060.

Being totally vague, I'd say it's somewhere between the 1060 and 1070, which is quite a big range in itself.

It's not between the 1060 and 1070; it's more or less equivalent to the 1060 and 480, so it's miles away from the 1070.

Avatar image for tdkmillsy
tdkmillsy

6617

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#591 tdkmillsy
Member since 2003 • 6617 Posts

@crimson_v said:
@tdkmillsy said:
@Dark_sageX said:

@tdkmillsy: Not a specific card per se, more a specific PC spec that includes a specific card, i.e. a system that has a GTX 1060 combined with a CPU that doesn't cause a bottleneck, like an i5. Builds with a GTX 1070 tend to also have high-end CPUs and other components (although even mid-range CPUs don't bottleneck it).

So when I say the X One X matches a GTX 1060, I also mean including an i5 (or equivalent) in the benchmarks, not a GTX 1070.

Fair enough, but someone on here will pick a game that performs better on Xbox One X and claim you are wrong.

I don't think you are a million miles away, but there will be games that run closer to the 1070 and some (more CPU-bound) that are around the 1060.

Being totally vague, I'd say it's somewhere between the 1060 and 1070, which is quite a big range in itself.

It's not between the 1060 and 1070; it's more or less equivalent to the 1060 and 480, so it's miles away from the 1070.

Yes it is

Avatar image for scatteh316
scatteh316

10273

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#593  Edited By scatteh316
Member since 2004 • 10273 Posts

A STOCK RX 580 has:

More TFLOPS than the X's GPU

More pixel fill rate than the X's GPU

More texture fill rate than the X's GPU

And factory-overclocked versions have more effective bandwidth than the X's GPU (remember, the X's GPU has to share bandwidth with its CPU, something that PC GPUs DON'T do).

And reading Techpowerup's review of the Sapphire RX 580 Nitro+, an OVERCLOCKED RX 580 is still 26% slower than a STOCK GTX 1070 at 1440p and 29% slower than a STOCK GTX 1070 at 4K.

So no..... the X's GPU is nowhere near a GTX 1070. If an OVERCLOCKED RX 580 is 29% SLOWER than a STOCK GTX 1070, the gap is going to be over 30% when compared to the X's GPU.
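The arithmetic behind that "over 30%" claim, using the Techpowerup 4K figure quoted above. The 3% deficit assumed for the X's GPU versus an overclocked RX 580 is a hypothetical placeholder, not a benchmark result:

```python
# Rough arithmetic behind the ">30%" claim -- the 29% figure is the
# quoted Techpowerup 4K result; the X1X deficit is an assumption.

gtx1070 = 1.00                    # stock GTX 1070, normalized
rx580_oc = gtx1070 * (1 - 0.29)   # overclocked RX 580: 29% slower at 4K

# If the X's GPU lands a few percent below an overclocked RX 580
# (lower core clock, bandwidth shared with the CPU), the gap to the
# 1070 compounds past 30%.
x1x_gpu = rx580_oc * 0.97         # hypothetical 3% deficit vs the OC 580

gap = 1 - x1x_gpu / gtx1070
print(f"{gap:.1%}")               # about 31%
```

The point is that percentage deficits compound multiplicatively, so even a small shortfall versus the OC 580 pushes the gap to the 1070 past 30%.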

Avatar image for 04dcarraher
04dcarraher

23858

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#594 04dcarraher
Member since 2004 • 23858 Posts
@scatteh316 said:

And factory-overclocked versions have more effective bandwidth than the X's GPU (remember, the X's GPU has to share bandwidth with its CPU, something that PC GPUs DON'T do).

Actually no, it does not have more effective memory bandwidth than the X1X's GPU. Even at 9 GHz effective, the RX 580's GDDR5 bandwidth only reaches 288 GB/s, while the X1X will still have around 300 GB/s out of its 326 GB/s. I highly doubt the rest of the system will use more than 30 GB/s at any given time.

Avatar image for scatteh316
scatteh316

10273

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#595  Edited By scatteh316
Member since 2004 • 10273 Posts

@04dcarraher said:
@scatteh316 said:

And factory-overclocked versions have more effective bandwidth than the X's GPU (remember, the X's GPU has to share bandwidth with its CPU, something that PC GPUs DON'T do).

Actually no, it does not have more effective memory bandwidth than the X1X's GPU. Even at 9 GHz effective, the RX 580's GDDR5 bandwidth only reaches 288 GB/s, while the X1X will still have around 300 GB/s out of its 326 GB/s. I highly doubt the rest of the system will use more than 30 GB/s at any given time.

With its higher clock rate and more performance, I would imagine it using more than 30 GB/s peak throughput.

But don't OS calls and the little things use up bandwidth too?

Avatar image for xantufrog
xantufrog

17898

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 5

#596 xantufrog  Moderator
Member since 2013 • 17898 Posts

@scatteh316: yeah, how can CPU processes use shared RAM SPACE but not bandwidth? Seems like it's gotta be a lot more than 30 GB/s.

Avatar image for tormentos
tormentos

33793

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#597 tormentos
Member since 2003 • 33793 Posts

@tdkmillsy:

The RX 480 and 580 don't share their 256 GB/s of bandwidth; the Xbox One X's bandwidth is shared.

Avatar image for 04dcarraher
04dcarraher

23858

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#598  Edited By 04dcarraher
Member since 2004 • 23858 Posts

@scatteh316 said:
@04dcarraher said:
@scatteh316 said:

And factory-overclocked versions have more effective bandwidth than the X's GPU (remember, the X's GPU has to share bandwidth with its CPU, something that PC GPUs DON'T do).

Actually no, it does not have more effective memory bandwidth than the X1X's GPU. Even at 9 GHz effective, the RX 580's GDDR5 bandwidth only reaches 288 GB/s, while the X1X will still have around 300 GB/s out of its 326 GB/s. I highly doubt the rest of the system will use more than 30 GB/s at any given time.

With its higher clock rate and more performance, I would imagine it using more than 30 GB/s peak throughput.

But don't OS calls and the little things use up bandwidth too?

The RX 580's standard 8 GHz GDDR5 sits at 256 GB/s, while OC'ed to 9 GHz it reaches 288 GB/s.

AMD's Phenom II memory controller only allows 21 GB/s, the FX-8350's memory controller maxes out at 29.9 GB/s, and even AMD's flagship Ryzen 1800X memory controller maxes out at 42.7 GB/s.

Everything processed for the OS and the little things has to go through the CPU. So unless the X1X's CPU has a massive memory controller equal to or better than AMD's Ryzen or Intel CPUs, the X1X's GPU will have a bit more memory bandwidth than an OC'ed RX 580.

So let's say the X1X's system usage does hit 43 GB/s at times; that would still leave the GPU 283 GB/s, which is 27 GB/s over the stock RX 580's bandwidth.

But even then, that extra bandwidth isn't going to give the X1X any magic lead over an RX 580. Guru3D tested a highly overclocked RX 580 with an extra 210 MHz core clock and 9 GHz memory against the factory overclock with 8 GHz memory, and it only yielded a 7% average gain at 1440p.
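All the bandwidth figures being traded in this thread come from the same formula: effective data rate per pin times bus width, divided by eight. A quick sketch using the numbers from the posts above; the 43 GB/s CPU/OS reserve is the debated estimate, not a measurement:

```python
# Peak GDDR5 bandwidth = effective data rate (Gbps per pin) * bus width / 8.
# Figures match the numbers in the posts above; the "system reserve"
# is the rough estimate being debated, not a measured value.

def bandwidth_gbs(data_rate_gbps: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_bits / 8

rx580_stock = bandwidth_gbs(8.0, 256)  # 256.0 GB/s
rx580_oc    = bandwidth_gbs(9.0, 256)  # 288.0 GB/s
x1x_total   = bandwidth_gbs(6.8, 384)  # 326.4 GB/s

# Subtract a generous CPU/OS share (the Ryzen-class 42.7 GB/s
# figure above, rounded to 43) to estimate the GPU's slice.
x1x_gpu_share = x1x_total - 43         # 283.4 GB/s

headroom = x1x_gpu_share - rx580_stock
print(round(headroom))                 # ~27 GB/s over a stock RX 580
```

The Guru3D result quoted above is the caveat: raw bandwidth headroom only translates into frames when the GPU is actually bandwidth-bound.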

Avatar image for speedfog
speedfog

4966

Forum Posts

0

Wiki Points

0

Followers

Reviews: 18

User Lists: 0

#599 speedfog
Member since 2009 • 4966 Posts

Who cares? Meanwhile the console sells like a beast and is still going strong, with a bunch of 4K support and improved graphics. Even without a 4K TV (you know who you are), the graphical increase and higher fps are already a good reason to buy it, especially for 500 bucks.

Avatar image for Juub1990
Juub1990

12622

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#600 Juub1990
Member since 2013 • 12622 Posts

@04dcarraher: Yeah, increasing memory clock speed yields little benefit unless it's absolutely massive and the GPU is memory-bandwidth bound. Otherwise it won't make a difference.