This thread is awesome, please keep going :D I can't really add much, but I really enjoy reading it :)
@commander:
|
so it's equivalent to an RX 480 meaning it's more than 50% behind the 1070
The Xbox One X GPU has:
more shading units
more TMUs
more floating-point throughput
more memory
a much wider memory bus
and much more bandwidth
How is that equivalent to a 480?
https://www.techpowerup.com/gpudb/2848/radeon-rx-480
At least you stated what you think it is; others just say what it isn't, without sticking their neck out and saying what it is.
Sorry, but there's no proof the 1070 is 20 percent better at this time.
If they uncap the frame rate or give a detailed analysis, sure, we can dispute that. I already said it was pointless to compare games with overhead, because the frame rate on the X1X is seldom uncapped, making comparisons impossible.
@tdkmillsy:
|
The Xbox One X GPU has more shading units, more TMUs, more floating-point throughput, more memory, a much wider memory bus, and much more bandwidth. How is that equivalent to a 480?
It's around an RX 480. 'Much more bandwidth', as you put it, is stretching it, as the Xbone has to share that bandwidth with the whole system.
And if you look at benchmarks comparing a Vega 56 vs a Vega 64 at the same clocks, you'll find there's nothing in it, despite Vega 64's shader advantage.
Same as other AMD GPUs: 290 vs 290X, 7950 vs 7970. The top-tier cards have never really been faster at the same clocks than the second-tier cards, and certainly nowhere near as much as the paper specs would imply, as AMD's GPUs are terribly inefficient.
At best you can call Scorpio's GPU equal to an RX 480 with a mild overclock.
@waahahah: We have to wait until we get the whole rundown. As it stands, we don't know how the X1X version stacks up, not to mention the 1070 still performs better regardless. It's 20% better.
Sorry, but there's no proof the 1070 is 20 percent better at this time.
This^. You pointed out 20%, and literally said we can't compare because one of them is capped. It's closer to the 1070 than the 1070 is to the 1080 in this case, and we aren't talking about a world of difference either. I don't expect it to be exact, because AMD doesn't have as much processing power, but it's so close that you seem to have a big issue.
I actually couldn't find a 1060 benchmark at 4K ultra. Seems to be it's more than 20% away... although the RX 580 does 47 fps, at least in one test. Are these 1070 numbers right?
https://www.youtube.com/watch?time_continue=72&v=ozrnN82sUG4
Sorry, but there's no proof the 1070 is 20 percent better at this time.
If they uncap the frame rate or give a detailed analysis, sure, we can dispute that. I already said it was pointless to compare games with overhead, because the frame rate on the X1X is seldom uncapped, making comparisons impossible.
NX Gamer ran Witcher 3 unpatched on the Xbox One X at 1080p. It was able to hit 60 fps at times, but in combat or riding through a town it dipped into the 40s and 30s. The GTX 1070 can do 1080p at high settings at an average of 80+ fps, and at ultra it hovers around 60.
I think DF did something similar to that with AC Unity. It had the same issues: 60 fps, but it never stuck to it when the scenes got intense.
It's closer to the 1070 than the 1070 is to the 1080 in this case. [...] Are these 1070 numbers right?
Then why say it's close when it isn't? The comparison is obviously invalid. Would the X1X average 31? 32 fps? Is Depth of Field set to Ultra? Very High? Would it lose 2, 3, 4 fps if everything was set to Ultra?
The RX 580 is what I think the X1X is equivalent to: right between a 1060 and a 1070. I had previously said between a 980 and a 1070, which is basically the same thing. I expected it to be about 15-20% slower than a GTX 1070, which so far seems to be right where it falls.
According to TechPowerUp, the difference at 1080p between the RX 580 and GTX 1070 is 42%, but as we go up in resolution the gap gets smaller, much smaller in fact. I don't think the gap between the X1X and 1070 is that big, but I also don't think the X1X can compete with a 1070.
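For what it's worth, a lot of the back-and-forth over "20%" vs "50% behind" comes from which card is used as the baseline; the same gap reads differently in each direction. A minimal sketch of the arithmetic, with invented fps figures purely for illustration:

```python
# Relative-performance gaps depend on the baseline. Invented numbers:
# suppose the GTX 1070 averages 100 fps where the RX 580 averages 70 fps.

def percent_faster(fps_a: float, fps_b: float) -> float:
    """How much faster card A is, using card B as the baseline."""
    return (fps_a / fps_b - 1) * 100

def percent_slower(fps_a: float, fps_b: float) -> float:
    """How much slower card B is, using card A as the baseline."""
    return (1 - fps_b / fps_a) * 100

print(percent_faster(100, 70))  # ~42.9 -> "the 1070 is ~42% faster"
print(percent_slower(100, 70))  # 30.0  -> "the 580 is 30% slower"
```

So a "42% faster" TechPowerUp-style figure and a "30% slower" reading describe the exact same gap, which is worth keeping in mind before arguing over whose percentage is right.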
NX Gamer ran Witcher 3 unpatched on the Xbox One X at 1080p. [...] The GTX 1070 can do 1080p at high settings at an average of 80+ fps, and at ultra it hovers around 60.
It will not do that with a CPU similar to the Xbox One X's.
The Witcher 3 actually ran better on my i5 2500 and GTX 950 than on my i3 4170 and GTX 970.
I can get a higher resolution with the GTX 970, sure, but the frame rate will always be worse than the i5 2500 and GTX 950 at low-medium settings.
NX Gamer ran Witcher 3 unpatched on the Xbox One X at 1080p. [...] The GTX 1070 can do 1080p at high settings at an average of 80+ fps, and at ultra it hovers around 60.
You're talking about CPU power there, because that was The Witcher unpatched at a low resolution.
@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.
Then why say it's close when it isn't? [...] The RX 580 is what I think the X1X is equivalent to: right between a 1060 and a 1070.
From a viewer's standpoint, it's effectively the same when there isn't a CPU bottleneck. Do you think anyone cares about a 6 fps average difference? That doesn't include dips. From a nitpicking standpoint there are some differences, but when it's GPU-bound it's pretty damn close, and in those cases the end user probably won't notice the difference.
20% is close. I don't think the 1060 is matching it at that resolution, although I tried to find a benchmark.
The RX 580 is matched pretty equally with the 1060 in a lot of benchmarks, which is why I thought to look that up. My point about the RX 580 being at 47 fps is that it's better than the 1080 benchmark I posted.
@tdkmillsy:
|
The Xbox One X GPU has more shading units, more TMUs, more floating-point throughput, more memory, a much wider memory bus, and much more bandwidth. How is that equivalent to a 480?
The RX 480 has 4 fewer CUs, and yet most commercially sold RX 480s have the same floating-point performance as the 1X's GPU.
https://www.techpowerup.com/gpudb/2848/radeon-rx-480
This site lists the reference design, which has a lower clock speed than most other commercially available cards (and even the reference card is just 0.16 TFLOPS behind the 1X's GPU).
The wider memory bus would definitely be a plus, but sadly it's shared memory, and the CPU takes up some of that bandwidth and those cycles.
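The floating-point figures being traded here fall straight out of shader count and clock speed: shaders × 2 ops per clock (fused multiply-add) × clock. A quick worked check, using TechPowerUp-style listed clocks (the OC clock is illustrative, as partner cards vary):

```python
# Peak FP32 throughput for a GCN-style GPU:
# shaders * 2 ops/clock (fused multiply-add) * clock speed.

def peak_tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1e6

print(peak_tflops(2304, 1266))  # RX 480 reference boost: ~5.83 TFLOPS
print(peak_tflops(2560, 1172))  # Xbox One X GPU:         ~6.00 TFLOPS
print(peak_tflops(2304, 1340))  # a factory-OC RX 480:    ~6.17 TFLOPS
```

That ~0.17 TFLOPS gap between the reference 480 and the 1X's GPU is the "0.16" difference mentioned above, and a mild factory overclock is enough to close it.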
@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.
You also have to account for the fact that the game was not coded with the Xbox One X in mind. Yes, it can scale and do all of that, but when a game is not specifically coded for a certain platform, you will not get all of the performance out of the system. It is like Boost Mode on the Pro: yes, games run better, but when they are specifically coded to work with the Pro, you will get better performance out of it.
You also have to account for the fact that the game was not coded with the Xbox One X in mind. [...]
Fair point.
@Xplode_games: the unpatched version had a dynamic scaler that went between 900p and 1080p. The X allows the scaler to hit 1080p consistently. There are drops outside of town and combat areas as well. They're just not as severe as when you're in a town.
The X1X at 1080p is CPU-limited; everyone knows that.
@commander: Quote me where I said a 20% performance difference is comparable.
You're the one who pulled the 20% number out of your ass, when in reality the 8600K is currently sitting at a 5% delta from the 7700K in average gaming performance. Not just in your cherry-picked titles.
Also, good job completely ignoring the fact that you can't even keep straight which CPUs are being compared, and posted charts with the wrong CPU twice.
So much for your supposed 20% difference. Or is 478 benchmarks still not a large enough sample size?
So we had:
1060
1070
1080
480
580
various OC versions of each
benchmarks showing it's closer to one card or the other depending on the game
developers claiming one thing or another before and after launch
professionals (DF, for example) claiming it's like different cards for different games
It seems practically impossible to accurately say it's exactly like a specific PC card, which makes sense, because it's not.
If you said it's a match for the 1070 for ALL games, then yes, you should say sorry.
But so should all the others who have claimed it's like a specific card.
@tdkmillsy: Not like a specific card per se, more in line with a specific PC spec that includes a specific card, i.e. a system that has a GTX 1060 (combined with a CPU that doesn't cause a bottleneck, like an i5). Builds with a GTX 1070 tend to also have high-end CPUs and other components (although even mid-range CPUs don't bottleneck it).
So when I say the X ONE X matches a GTX 1060, I also mean including an i5 (or equivalent) in benchmarks, and not a GTX 1070.
@Xplode_games: wow.....just.....wow! Dude, get out of your mother's basement and get a job. Better yet, get a girlfriend. A woman's touch will go a long way to help you relax. Your mother's touch does not count. ;) The X is an amazing console. Will it beat out a dedicated gaming system? Nope. Will it come close to matching the experience? Yup. Get a life.
You're the one who pulled the 20% number out of your ass, when in reality the 8600K is currently sitting at a 5% delta from the 7700K in average gaming performance. [...]
There are too many cherries there to call it cherry-picked. Synthetic benchmarks show exactly what the 8600K is: a CPU with 50 percent more cores.
Your random benchmark-site crap isn't going to change that. Worst bench turbo speed 4.02 GHz, lmao. The i5 8600K goes to 5 GHz on air.
@commander: userbenchmark.com is random shit, but your YouTube video with shitty trance music isn't? LOL.
Lemme guess, Tom's Hardware and AnandTech are probably just some random shit sites as well? Same with eurogamer.net? Except of course when you tried to use it for your own evidence and got owned.
Good lord, you remind me of an anti-vaxxer.
Still waiting for you to quote where I said 20% is comparable, too.
userbenchmark.com is random shit, but your YouTube video with shitty trance music isn't? LOL. [...]
Well, it's the only benchmark that compares the two overclocked, so unless you've got any other benchmarks, you've got no case.
@commander: see, you can't even read benchmarks properly. I don't know why I bother with you.
UserBenchmark takes user-submitted benchmarks, at whatever clocks the user who submitted set them at.
Both the 7700K and the 8600K will have a similar ratio of overclocked vs. non-overclocked benchmarks submitted. Note the best and worst benchmarks on both sides, mirroring reality.
UserBenchmark takes user-submitted benchmarks, at whatever clocks the user who submitted set them at. [...]
If these are the straws you want to grasp at this time, be my guest.
The problem with this site, when you want to do a proper comparison, is that it isn't a closed environment, which warps the results. The users could be running all sorts of stuff while doing these tests, all while the PC hardware apart from the CPU differs completely.
They don't show the benchmarks separately either. What are they using to test all this, Lost Planet at 480p?
The benchmarks aren't all done at the same clock speeds either. What does it matter that they use a similar ratio of overclocked to non-overclocked? We're comparing the i7 7700K and i5 8600K at a high overclock here; the lower-clocked benchmarks don't matter.
All this while you have 500 vs. 80,000 benchmarks?
Like I said, you would make statistics professors cry.
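Whatever you make of the tone, the statistical complaint is a real one: aggregating user-submitted runs at mixed clock speeds can shrink a like-for-like gap. A toy simulation, with every number invented purely for illustration:

```python
import random

random.seed(0)

# Invented ground truth: chip B is 20% faster than chip A at matched
# settings, and overclocking adds ~15% to either chip.
STOCK = {"A": 100.0, "B": 120.0}  # fps at stock clocks
OC_GAIN = 1.15

def aggregate_mean(cpu: str, oc_share: float, n: int = 10_000) -> float:
    """Mean of user-submitted scores; oc_share of submitters overclock."""
    total = 0.0
    for _ in range(n):
        base = STOCK[cpu] * (OC_GAIN if random.random() < oc_share else 1.0)
        total += random.gauss(base, 10)  # background tasks, mixed hardware
    return total / n

# If far more A owners overclock than B owners, the aggregate gap
# lands well under the true 20% matched-settings difference.
mean_a = aggregate_mean("A", oc_share=0.6)
mean_b = aggregate_mean("B", oc_share=0.2)
print(f"aggregate gap: {(mean_b / mean_a - 1) * 100:.1f}%")  # ~13%
```

An OC-vs-OC comparison and a mixed aggregate are answering different questions, so both sides can be quoting real numbers and still disagree.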
@commander: what's wrong with Lost Planet at 480p? Are you forgetting you posted that exact benchmark yourself? Now it's worthless because it proved you wrong when you posted the wrong CPUs? LOL.
But hey, if you wanna believe one random YouTube guy is a controlled, statistically sound environment, rather than an aggregate of many users, then be my guest.
And again, I want you to quote me where I said 20% is comparable. You keep making shit up, and then can't prove it.
But hey, if you wanna believe one random YouTube guy is a controlled, statistically sound environment, rather than an aggregate of many users, then be my guest. [...]
You will get a lot of benchmarks with older games and older synthetic benchmarks that bring the i7 7700K close to the Coffee Lake chip because of the hyperthreading.
But games that scale well across 6 cores will take advantage of that extra CPU power, as you've seen in games like Watch Dogs 2, Crysis 3, and GTA V, and this is only going to happen with more games in the future.
In the past, i5s were dual cores with hyperthreading and they were enough for gaming, until quad cores became mainstream; then i5s became quad cores and left the older dual-core i5s with HT in the dust, and i3s replaced them. Now i3s are quad cores, i5s are hexacores, and i7s are hexacores with HT. The older i7s are not as weak as quad cores without hyperthreading, but they're still weaker than a hexacore.
When you boost the clock speeds to 5 GHz, the difference becomes more pronounced, and that is what this discussion is about. You want to throw old games into the mix, fine, but it warps the results. In the end, there's a 20 percent performance difference.
You didn't think it would be 20 percent, but that doesn't change the fact that you called it comparable.
@commander: So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. And no older games allowed. And no newer games that don't scale to 6 threads. And any benchmark posted that disagrees with said number is statistically unsound, while one random YouTube guy is ironclad.
K, sounds good.
So this 20% number you've pulled out of your ass is from games that scale well across 6 threads. But not 8. [...]
Games that scale well across 6 threads (or 8) and games that scale well across 6 cores are a different matter.
If games take into account that 2 threads have to share the same core, then that's going to be a performance penalty for the 6-core, since there will be power left untapped; the 6-core doesn't use any hyperthreading. There's a reason some games run worse with hyperthreading. Do you want to use those games as well for your benchies?
But even if a game doesn't scale well, the 7700K still gets beaten by the 8600K at 5 GHz, and those kinds of games and apps will become rarer in the future, since the i7 as we knew it has been discontinued. The fact remains that 6-core CPUs will become the new standard and that they have more CPU power available.
There's a pattern you can see in the benchies: the performance difference goes from 0 to 20 percent, and 20 percent is not an exception. Synthetic benchmarks like Cinebench also show 20 percent. The i7 isn't going to come back from that; these are not GPUs we're comparing, with different architectures and drivers.
If games take into account that 2 threads have to share the same core, then that's going to be a performance penalty for the 6-core [...]
There is so much stupid in this post.
There is so much stupid in this post.
Just because you don't understand it doesn't mean someone or something else is stupid.
If games take into account that 2 threads have to share the same core, then that's going to be a performance penalty for the 6-core [...]
There is so much stupid in this post.
It's not as stupid as you think; the way he explained it isn't great, though.
If you are able to keep the cores fully busy (with tasks like numerical math and crunching organized data), or if the performance of your main thread is more important than the multi-core performance you gain from having HT, you are better off without hyperthreading. The reason is that the instructions of 2 threads on a single core can end up fighting over pipeline resources (having to wait a cycle for the other); for example, due to limited cache they can end up evicting each other's data.
It rarely happens in modern times, though: resources are more generously allocated per thread, CPU microcode got better, OS kernels got better, and applications are also better coded with HT in mind.
These days I highly advise against disabling any modern SMT solution (for an everyday consumer).
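To make the contention point concrete: the usual software-side mitigation is pinning hot threads to distinct physical cores rather than disabling SMT outright. A minimal, Linux-only sketch using Python's standard library; the core numbering here is hypothetical, and real sibling pairs should be read from /sys (also note CPython's GIL means these threads won't truly run in parallel, so the point is just the affinity call):

```python
import os
import threading

# Hypothetical layout for a 4-core/8-thread CPU: logical CPUs 0-3 are
# the first hardware thread of each physical core, 4-7 their SMT
# siblings. Verify the real mapping via
# /sys/devices/system/cpu/cpu*/topology/thread_siblings_list.
PHYSICAL_FIRST_THREADS = (0, 1, 2, 3)

def busy_work(n: int) -> int:
    # Stand-in for a compute-heavy engine task.
    return sum(i * i for i in range(n))

def pinned_worker(cpu: int, n: int) -> None:
    # Restrict this thread to one logical CPU so it never shares a
    # physical core (and its caches) with another pinned worker.
    os.sched_setaffinity(0, {cpu})
    busy_work(n)

threads = [
    threading.Thread(target=pinned_worker, args=(cpu, 1_000_000))
    for cpu in PHYSICAL_FIRST_THREADS
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Schedulers and engines do a version of this natively these days, which is part of why the HT penalty cases have become rare, as described above.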
Not like a specific card per se, more in line with a specific PC spec that includes a specific card. [...]
Fair enough, but someone on here will pick a game that performs better on Xbox One X and claim you are wrong.
I don't think you are a million miles away, but there will be games that run closer to the 1070 and some (more CPU-bound) that are around the 1060.
Being totally vague, I'd say it's somewhere between a 1060 and a 1070, which is quite a big range in itself.
It rarely happens in modern times, though [...] These days I highly advise against disabling any modern SMT solution (for an everyday consumer).
In a perfect world..... but Windows core parking and thread affinity are far from perfect. I remember all the drama they caused when AMD released Bulldozer and its module design.
Being totally vague, I'd say it's somewhere between a 1060 and a 1070, which is quite a big range in itself.
It's not between the 1060 and 1070; it's more or less equivalent to the 1060 and the 480, so it's miles away from the 1070.
It's not between the 1060 and 1070; it's more or less equivalent to the 1060 and the 480, so it's miles away from the 1070.
Yes it is.
A STOCK RX 580 has:
more TFLOPS than X's GPU
a higher pixel fill rate than X's GPU
a higher texture fill rate than X's GPU
And factory-overclocked versions have more effective bandwidth than X's GPU (remember, X's GPU has to share bandwidth with its CPU, something that PC GPUs DON'T do).
And reading TechPowerUp's review of the Sapphire RX 580 Nitro+, it shows that an OVERCLOCKED RX 580 is still 26% slower than a STOCK GTX 1070 at 1440p, and 29% slower at 4K.
So no, X's GPU is nowhere near a GTX 1070. If an OVERCLOCKED RX 580 is 29% SLOWER than a STOCK GTX 1070, the gap is going to be over 30% when compared to X's GPU.
And factory-overclocked versions have more effective bandwidth than X's GPU (remember, X's GPU has to share bandwidth with its CPU, something that PC GPUs DON'T do).
Actually, no, it does not have more effective memory bandwidth than the X1X's GPU. Even at 9 Gbps, the RX 580's GDDR5 bandwidth only reaches 288 GB/s, while the X1X will still have around 300 GB/s of its 326 GB/s. I highly doubt the rest of the system will use more than 30 GB/s at any given time.
Actually, no, it does not have more effective memory bandwidth than the X1X's GPU. [...]
With its higher clock rate and more performance, I would imagine it using more than 30 GB/s peak throughput.
But don't OS calls and the little things use up bandwidth too?
With its higher clock rate and more performance, I would imagine it using more than 30 GB/s peak throughput. [...]
But don't OS calls and the little things use up bandwidth too?
The RX 580's standard 8 Gbps GDDR5 sits at 256 GB/s, while OC'd to 9 Gbps it reaches 288 GB/s.
AMD's Phenom II memory controller only allows 21 GB/s, the FX-8350's memory controller maxes out at 29.9 GB/s, and even AMD's flagship Ryzen 1800X memory controller maxes out at 42.7 GB/s.
Everything processed for the OS and the little things has to go through the CPU. So unless the X1X's CPU has a massive memory controller equal to or better than AMD's Ryzen or Intel CPUs, the X1X's GPU will have a bit more memory bandwidth than an OC'd RX 580.
So let's say the X1X's system usage does use 43 GB/s at times. That would still leave the X1X 283 GB/s, which is still 27 GB/s over the stock RX 580's bandwidth.
But even then, that extra bandwidth isn't going to give the X1X any magic lead over an RX 580. Guru3D tested a highly overclocked RX 580, with an extra 210 MHz on the core and 9 Gbps memory, against the factory overclock with 8 Gbps memory, and it only yielded a 7% average gain at 1440p.
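All of the bandwidth figures in this exchange come from one formula: bus width × effective data rate ÷ 8. A quick sketch checking the numbers (the 30-43 GB/s CPU reservation is this thread's guess, not a published spec):

```python
def bandwidth_gb_s(bus_bits: int, rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) * data rate / 8."""
    return bus_bits * rate_gbps / 8

rx580_stock = bandwidth_gb_s(256, 8.0)  # 256.0 GB/s
rx580_oc    = bandwidth_gb_s(256, 9.0)  # 288.0 GB/s
x1x_total   = bandwidth_gb_s(384, 6.8)  # 326.4 GB/s

# Thread's assumption: CPU/OS traffic eats 30-43 GB/s of the shared pool.
for cpu_share in (30, 43):
    print(f"X1X GPU share with {cpu_share} GB/s reserved: "
          f"{x1x_total - cpu_share:.1f} GB/s")  # 296.4 and 283.4
```

Even on the pessimistic 43 GB/s assumption, the X1X's GPU keeps more headline bandwidth than a 9 Gbps RX 580, which is the arithmetic behind the 283 GB/s figure above.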
Who cares? Meanwhile the console sells like a beast and is still going strong, with a bunch of 4K support and increased graphics. You don't even need a 4K TV (you know who you are); the graphical improvement and higher fps are already a good reason to buy it, especially for 500 bucks.