btk2k2's forum posts

#1 btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

You're comparing an i7-4770k with the cpu in the ps4/xbox one. You didn't show me 28 percent in cpu bound games, I already replied to you on that and explained it as well. Those benchmarks are not cpu bound. The only cpu stress you can see in your benchmarks is overhead that you don't have in a console. An i3 already has the same frames as an i7 in your benchies.

Your comparison with a gearbox shows that you don't understand what I'm trying to say. Gddr5 has higher latency and higher speed ram always has higher latency. The higher speed doesn't make up for the latency when there's not enough cpu cache to bridge the latency gap.

Explain it again. The test used 4 core Jaguar CPUs running in Windows, which means they are going to be a bigger bottleneck than 8 cores (6 for games) running in a console environment because a) the consoles have more CPU grunt, b) the consoles have lower CPU overhead coming from the OS and c) the APIs in the consoles have lower overhead than DX11 does on PC. Like I said, it would have been interesting to compare DX to Mantle benchmarks in the BF4 test to see how much of an improvement a lower level API gives with such weak CPUs, but they did not. What do i3s have to do with the Athlon 5350 and the 5150?

Just an FYI, being CPU bound due to extra overhead or CPU bound due to games taking up all the cycles results in the same situation: you are CPU bound.

It is not that I do not understand what you are saying, it is that what you are saying is WRONG. You are saying faster ram has higher latency, but it is not the case. It takes more cycles to perform an action, so the latency timings are higher, but that does not increase overall latency because the clock speed is also higher. Go find some Hynix documentation. By your reasoning DDR3 800 with 6-6-6 timings would have lower latency than DDR3 2133 with 14-14-14 timings, and that is not the case, as the ACT to ACT times for them are 52.5 ns and 46.09 ns respectively.
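
A quick back-of-the-envelope sketch of why tighter timings at a lower clock do not mean lower absolute latency (the CL values and clocks below are just the nominal figures for each speed grade; the ACT-to-ACT numbers above come from the Hynix datasheets):

```python
# Latency in nanoseconds = cycles / clock (MHz) * 1000.
# Illustrative CAS-latency comparison for the two DDR3 speed grades mentioned above.
def latency_ns(cycles, clock_mhz):
    return cycles / clock_mhz * 1e3

print(latency_ns(6, 400))      # DDR3-800  CL6  -> 15.0 ns
print(latency_ns(14, 1066.5))  # DDR3-2133 CL14 -> ~13.1 ns, lower despite the "looser" timings
```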

You keep saying that faster ram has higher latency, and you keep being wrong, so why not just sit down, listen, and learn instead of spouting shit you obviously know naff all about. You also keep saying that the 5150 and 5350 were not CPU bound in those tests, yet if that were the case the performance would have matched the faster processors, so again you are wrong. You always seem to be wrong, and since you are not learning, I am done here.

@evildead6789 said:

@tormentos said:

@evildead6789 said:

To me choppy framerates don't mean superior

That is the problem dude, the xbox one has even choppier frames, less foliage and effects...lol

not when driving it doesn't and gta is still about stealing cars you know 'GRAND THEFT AUTO'

the x1 version would have been a lot better if it was 900p on the x1, for the ps4 this wouldn't have mattered, the cpu would still bottleneck.

Oh just give it a rest you little troll. The only time the PS4 drops more frames than the Xbox One is during free driving through heavy intersections, and the difference is at best 2 FPS. That was on a single run-through though, so unless DF, or someone with both copies, wants to go through the effort of a proper test with multiple runs to account for run-to-run variance, 2 FPS could be an outlier. During the scripted chase sequences, where traffic is the same, performance is very similar, with the PS4 ahead. Further, those busy intersections work fine when traversed at night, so there is obviously some subtle difference between the versions' lighting models, textures or draw distance that causes a slight hiccup on PS4.

When you get to shoot-outs though, the PS4 steams along at a rock-solid 30 FPS while the Xbox One is stuck around 25 FPS; at least it is a consistent framerate, but it is still poor nonetheless. This is by far the biggest difference in the game, and these frame rates persist even in environments where there is more foliage on the PS4 version, so it is keeping a higher frame rate while processing a higher graphical load, which is to be expected when the resolution is a match across versions.

#2 btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

Well, for starters it's more than a small cpu upclock, it's a 10 percent overclock; a lot of cpus have less than a 10 percent difference in performance and they're sold as separate cpus. It's true that games are mostly gpu bound nowadays, but not in the case of the ps4 and x1. The x1 solves its problems with the weaker gpu by lowering detail settings and/or lowering resolution. With the ps4 this doesn't matter since the cpu is the one doing the bottlenecking, in the latest games anyway; last gen ports don't have this problem because the cpu isn't stressed that much.

Secondly, the latency on the gddr5 will amplify the cpu bottleneck. Some here will say that the gddr5's memory bandwidth makes up for the latency, and that is true, but not when the cpu is bottlenecking and not with that amount of cpu cache. The cpu solves the memory latency with cpu cache. The haswell extreme series have 15 mb of cache, and they're running ddr4; it's quad pumped like gddr5 but still not as fast, though it doesn't have such high latency. The ps4 cpu has 4 mb of cache.

The faster the memory the higher the latency, and the more cpu cache is needed.

9.375% is small when you can OC an i7 4770k by 30%+ on air quite easily. I have shown you what a 28% increase in clock speed does for Jaguar in a CPU bound scenario, and that is a performance advantage of 10.6%. Assuming an equally CPU bound scenario and similar scaling, the Xbox One's 9.375% clockspeed advantage would translate to a real world FPS advantage of about 3.5%. That is a tiny boost, but it is about the best case scenario for Xbox One. If you think that is a large gap you are deluded. Any gap larger than that in a purely CPU bound scenario is for some other reason, API overhead, optimisation etc, but not CPU alone. Just like the 720p vs 1080p gap in Fox engine games is due to more than just the GPU, as the PS4 GPU is not that much more powerful, so there is another issue.
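
Spelling out that scaling arithmetic (a minimal sketch; the 10.6% figure is the Athlon 5350 vs 5150 result referenced above):

```python
# A 28% Jaguar clock bump (1.6 -> 2.05 GHz) gave ~10.6% more FPS in a CPU-bound test,
# i.e. roughly 0.38 FPS-% per clock-%. Apply that scaling to the Xbox One's 9.375% advantage.
observed_clock_gain = 2.05 / 1.6 - 1                # ~0.281
observed_fps_gain = 0.106
scaling = observed_fps_gain / observed_clock_gain   # ~0.38

xb1_clock_gain = 1.75 / 1.6 - 1                     # 0.09375
print(xb1_clock_gain * scaling)                     # ~0.035 -> ~3.5% best-case FPS gain
```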

GDDR5 does not have higher latency at the module level, and what I do know is that going from single channel to quad channel does not increase latency in Haswell based systems, so the increase in memory channels of GDDR5 (8x32-bit) vs DDR3 (4x64-bit) is unlikely to result in a change in memory latency. That means the only possible location for an increase in latency is in the memory controller and the crossbar; both are Jaguar/GCN APUs and both are going to be very similar to each other, maybe even the same. Think of it in the following way. Latency is your speed in MPH, clockspeed is your RPM and access cycles are your gear. DDR3 runs at a low RPM but in a higher gear, giving you your speed; GDDR5 runs at a higher RPM but in a lower gear, giving you an almost identical speed, it just gets there differently. You could even equate fuel usage to bandwidth if you wanted to, as running at a higher RPM burns more fuel than a lower RPM and GDDR5 has more memory bandwidth than DDR3.
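
To put illustrative numbers on that gear analogy (the GDDR5 cycle count below is an assumed/typical figure, not a value from either console's datasheet):

```python
# More cycles at a faster clock can land on roughly the same absolute latency.
def latency_ns(cycles, clock_mhz):
    return cycles / clock_mhz * 1e3

print(latency_ns(14, 1066.5))  # DDR3-2133, CL14                  -> ~13.1 ns
print(latency_ns(18, 1375))    # GDDR5-5500, ~18 cycles (assumed) -> ~13.1 ns
```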

@Tighaman said:

@tormentos: You cant poor tighaman me homie Ron again is going by what he knows about the GCN and this is based on his knowledge of legacy I'm going to say it again, This is not an AMD GPU that's in the xbox one I have looked far and wide for a DUAL set up like this GPU and I haven't found it. You can find a GPU configuration such as the PS4 but not the xbox one. I understand where Ron is coming from but this is a different Beast but if you apply this to legacy he is correct because not many know whats coming next so we look backwards for answers.

Really? OK, let's say your speculation is right and the Xbox One GPU is effectively 2x 6 CU GCN GPUs in a single package; that sounds like a Crossfire setup and comes with drawbacks vs a single GPU system. If you have multiple GCPs it means you need to use AFR or SFR to render the frame, as each set of CUs, ROPs, TUs etc is fed from one GCP. Unless you add a lot of silicon to make it so that they can communicate or send commands to the other CUs, it is definitely a Crossfire setup.

There are just two flaws with such a set up, the first is that doing it that way uses die space that could have gone to more functional units which would have increased performance at a greater rate than messing around with cross fire, the second is that doing it that way uses die space that could have gone to more functional units which would have increased performance at a greater rate than messing around with cross fire. Now I know technically that is the same flaw but it is such a huge one I thought it was worth mentioning twice.

#3  Edited By btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

@Krelian-co said:

after the ignorance you have shown in this thread you really think anyone would believe that?

Well people who can read believe it.

Blinded fanboys can't but I didn't write this thread for them in the first place.

Maybe you should reply on topic instead of saying I'm wrong all the time. Apparently you don't see that some people do agree with me, and it is mostly those who know a thing or two about hardware

What do you know about hardware?

A lot more than you do based on the crap you have been writing.

@evildead6789 said:

@Krelian-co said:

Seriously give it up, no one is buying it, you are way too ignorant to be someone remotely connected to anything hardware related.

Try harder. The people that are not sony fanboys, reviewers, and basically anyone that knows basic pc hardware and has an internet connection, all disagree with you.

You still didn't give me a single argument.

What is there to argue? That a small CPU upclock has a much lower effect on FPS in games than using a stronger GPU does. As I have said, if a game is purely CPU limited the difference is a bit less than 4%, assuming it never reaches its FPS cap. That is nothing, especially when the PS4 can run at that performance level with higher graphical fidelity.

If you have a gaming PC you can do a simple test. Benchmark some games at your stock CPU and GPU settings and note down the frame rate, then downclock the CPU by 50% and see what effect that has on framerate, then downclock the GPU and VRAM by 50% and return the CPU to stock. I am willing to bet that for the majority of games the GPU downclock will show a much greater reduction in frames than the CPU downclock.
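
If you do run that test, something like this makes the comparison explicit (a sketch; the FPS values are placeholders to be replaced with your own measured averages):

```python
# Placeholder results from the suggested downclock test; substitute your own numbers.
runs = {
    "stock":        60.0,  # stock CPU + stock GPU (hypothetical)
    "cpu_minus_50": 55.0,  # CPU downclocked 50%, GPU stock (hypothetical)
    "gpu_minus_50": 35.0,  # GPU/VRAM downclocked 50%, CPU stock (hypothetical)
}

for name in ("cpu_minus_50", "gpu_minus_50"):
    drop = (1 - runs[name] / runs["stock"]) * 100
    print(f"{name}: {drop:.1f}% FPS drop vs stock")
```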

Why is Unity so bad on all systems but especially PS4? The answer is simple: it was an MS co-marketing game, so the devs would have prioritised the Xbox One version so they could show it off, and they ran out of time to do their final optimisation sweeps on all platforms.

#4  Edited By btk2k2
Member since 2003 • 440 Posts

@ronvalencia: I agree with you. I would definitely say that comparing the r7 260 to the r7 265 is pretty much the same as comparing xbox one to the ps4 give or take a few percent.

@evildead6789: They both have dual tesselation units.

Latency of ddr3 and gddr5 is similar and the memory controllers will be designed to balance the needs of the cpu and gpu. This really is not a point of differentiation between them.

The fact you are talking about timings just solidifies your ignorance. Would you say ddr2 is lower latency than ddr3? It does have tighter timings after all.

EDIT: I read your link and they know sweet fa. Draw calls are tons faster on PS4 due to lower api overhead. What uses an entire cpu core on xbox one cannot even be found in performance traces on ps4. Just read the metro dev interview at digital foundry.

Oles Shishkovstov: Let's put it that way - we have seen scenarios where a single CPU core was fully loaded just by issuing draw-calls on Xbox One (and that's surely on the 'mono' driver with several fast-path calls utilised). Then, the same scenario on PS4, it was actually difficult to find those draw-calls in the profile graphs, because they are using almost no time and are barely visible as a result.

#5 btk2k2
Member since 2003 • 440 Posts

@ronvalencia said:

Your link doesn't show R7-260's 12 CU Bonaire. Better link http://www.anandtech.com/show/7754/the-amd-radeon-r7-265-r7-260-review-feat-sapphire-asus/5

From http://www.eurogamer.net/articles/digitalfoundry-2014-r7-260x-vs-next-gen-console

R7-260X vs X1 vs PS4

R7-260 (Bonaire 12 CU) would be similar to X1.

Eurogamer's PCs with R7-260X has Intel Core i7-3770 or AMD FX-6300. Intel Core i7-3770 is slightly faster than AMD FX-6300 (6 core at 3.5Ghz with 4.1Ghz Turbo).

Looking at the R7 260 specs it would be a bit faster than the Xbox One GPU because of the clock speed, but not by too much, maybe around 5% or so. It sits below the 7790, and that is only around 10% faster than the Xbox One GPU, so definitely the correct ballpark.

Based on the Anandtech link the 265 is 39% faster than the 260, right at the low end of my spectrum for the advantage the PS4 GPU has over the Xbox One GPU. I would say the 265 is a bit closer to the PS4 than the 260 is to the Xbox One, but we are talking 1-2% differences at most, so not really worth arguing over.

I would say the i7 3770 is a lot faster than the FX-6300, but at 1080p with this level of GPU they are probably GPU bound more than anything, so it will not make much difference for gaming benchmarks.

#6 btk2k2
Member since 2003 • 440 Posts

@tormentos said:

@ronvalencia said:

@tormentos:

Having better tessellation doesn't just disappear, i.e. the title could have higher geometry with less shader work, or this particular graphics pipeline stage would be completed faster on a high performing tessellation device when compared to a slower tessellation device. For X1 vs PS4's case, X1 only has about a 5.8 percent advantage (1) over PS4 on this graphics stage.

On the shader stages, PS4 has 28 percent advantage (2)

1. (1 - (1.6/1.7)) x 100 = 5.88 percent for X1.

2. (1 - (1.31/1.84)) x 100 = 28.8 percent for PS4.

Proper first party games would be programmed against the hardware's strengths.

In terms of total graphics stages, PS4 has the advantage in most cases.

For 60 fps target with shader power budget

X1 = 21 GFLOP per frame

PS4: 30 GFLOP per frame

At 60 fps target, PS4 can apply better shaders than X1 i.e. additional 9 GFLOP per frame budget.

For 30 fps target with shader power budget

X1 = 42 GFLOP per frame

PS4: 60 GFLOP per frame

7970 GE: 136 GFLOP per frame.

R9-290X: 187 GFLOP per frame. <-------- can run 3 1920x1080p screens with PS4's shader budget per frame.

At 30 fps target, PS4 can apply better shaders than X1 i.e. additional 18 GFLOP per frame budget.

This topic is about CPU bottlenecks and Ubisoft is pretty clear about this issue. Sony should have bumped up the CPU clock speed.

So you are still trying to make the xbox one look better? Tessellation is a demanding effect, and hitting 1080p is more costly on the xbox one than it will ever be on the PS4, so any game reaching 1080p can have better tessellation on PS4 than on xbox one, regardless of the xbox one's slightly higher clock speed, because it demands resources which on the xbox one would already be tied up in hitting 1080p and trying to keep either 60 or 30 FPS.

The only way I see that being effective is if the xbox one version runs at a much lower resolution so it has resources to spare, otherwise the xbox one GPU is too weak for the job.

http://www.anandtech.com/bench/product/549?vs=536

7770 vs 7850: scroll down and see both tessellation tests, and tell me again that the xbox one will beat the PS4 in tessellation just because it has a higher clock speed, which the 7770 has over the 7850.

To be fair the 7770 can only do 1 tri/clock but the 7850, the Xbox One and the PS4 can all do 2 tris/clock.

The Xbox One does have a slight edge in triangle setup rate, but it is a very minor advantage and it is not substantial enough for the Xbox One to use a higher level of tessellation if it is used in a game.
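
For context, a rough sketch of where that small setup-rate edge comes from (both GPUs set up 2 triangles per clock, so the gap is just the GPU clock difference):

```python
# Peak triangle setup rate = triangles per clock * GPU clock.
tris_per_clock = 2
xb1 = tris_per_clock * 853e6   # 853 MHz -> ~1.71 Gtris/s
ps4 = tris_per_clock * 800e6   # 800 MHz ->  1.60 Gtris/s
print((1 - ps4 / xb1) * 100)   # ~6.2%, the same ballpark as the ~5.9% figure quoted above
```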

Now ron, I did not delete your post as I do want to respond to it as well regarding the CPU bottlenecking issue. Unity is definitely a CPU heavy game, but the problem is that the performance differences between the console versions are much greater than the CPU performance differences. Further, for Ubisoft to produce a game that hammers the CPU in the way it does and then fail to fully optimise it, or to take advantage of GPU compute to free up some CPU runtime, is just poor decision making. I am sure if they had enough time to optimise it fully it would run at a fairly stable 30 FPS on both systems, with the PS4 version having plenty of GPU resources left unused.

#7 btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

I pretty much agree with what you said on the comparison of gpus there, apart from memory bandwidth. The fact that the hd 7850 has 153 gb/s memory bandwidth and that the hd 7870 has exactly the same memory bandwidth shows that 176 gb/s is simply overkill, especially if you know that this same memory is also used as system ram. Maybe you're right that they can't maximise it with the 32 rops, although I don't really know how much that would increase performance on the gpu (I would have to research that, and frankly I'm too lazy right now, I'll get back to you about that later). Still, that doesn't change the fact that this hardly does anything for system ram, especially when the cpu is bottlenecking.

In all honesty I was being daft regarding memory bandwidth on the 7850/7870. Sure, it would take 250-ish GB/s of bandwidth to maximise the ROP performance, but you only see the benefits of that above 1080p, and the shader performance of those cards is not there, so it becomes unbalanced.

Let's just look at some cards and the consoles to see if we can get a rough idea of the expected GPU performance differences.

                                    | 7790 | 7850  | R7 265 | 7870  | Xbox One      | PS4
Memory Bandwidth (GB/s)             | 96   | 153.6 | 179.2  | 153.6 | 68 -> ~190    | 176
Pixel Fill (Vantage Pixel Fill)     | 5    | 7.9   | 8.9    | 7.9   | ~3.54 -> ~6.3 | ~8 -> ~8.7
Texture Fill (Vantage Texture Fill) | 51.1 | 50.2  | 54.1   | 72.1  | ~37.3         | ~52.5
TFlops                              | 1.79 | 1.76  | 1.89   | 2.56  | 1.31          | 1.84

Going from 7790 -> 7850, where only the memory bandwidth and the pixel fill rate are higher, gave us a 25% increase in FPS.

Going from 7850 -> 7870, where only texture fill, shaders and setup rate are higher, gave us an 18.5% increase in FPS.

Going from 7790 -> 7870, where everything is higher, gave us a 48.4% increase in FPS.

Going from 7790 -> R7 265, where everything is higher (although some more than others), gives a 41% increase in FPS.

As you can see, the Xbox One is weaker than the 7790 except for memory bandwidth and pixel fill. This is similar to comparing the R7 265 to the 7870: the R7 265 is weaker than the 7870 except for memory bandwidth and pixel fill, but unlike the 7790 to 7850 comparison, the R7 265 is slower than the 7870. The advantage the 7870 has in texturing and shaders is 33% and 35% respectively vs the R7 265. The advantage the 7790 has in texturing and shaders is 37% and 37% respectively vs the Xbox One. If we take the average memory bandwidth and average pixel fill advantage of the Xbox One to be the same as the advantage the R7 265 has over the 7870, the numbers come out at 111.4 GB/s and 5.65 Gpix/s on Xbox One. The 7870 is 10% faster on average than the R7 265, which, using these figures, would be very similar to the advantage the 7790 has over the Xbox One GPU if it were a PC part.
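
A sketch of that estimate using the table figures above (my outputs land within rounding of the 111.4 GB/s and 5.65 Gpix/s quoted):

```python
# Credit the Xbox One with the same relative bandwidth / pixel-fill advantage over the 7790
# that the R7 265 has over the 7870, then see where a "PC part" Xbox One would sit.
bw  = {"7790": 96.0, "7870": 153.6, "r7_265": 179.2}
pix = {"7790": 5.0,  "7870": 7.9,   "r7_265": 8.9}

bw_ratio  = bw["r7_265"] / bw["7870"]    # ~1.17
pix_ratio = pix["r7_265"] / pix["7870"]  # ~1.13

print(bw["7790"] * bw_ratio)    # ~112 GB/s equivalent bandwidth
print(pix["7790"] * pix_ratio)  # ~5.6 Gpix/s equivalent pixel fill
# With texturing/shader deficits of ~37% (7790 vs Xbox One) against ~33-35% (7870 vs R7 265),
# the 7790 ends up roughly 10% ahead of the Xbox One GPU, mirroring the 7870 vs R7 265 gap.
```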

The PS4 is a bit easier: at its worst, its memory bandwidth and pixel fill are a match for the 7850 with the texturing and shaders being better; at its best, it is faster across the board. As we saw with 7870 vs 7850, increasing shader and texture performance with the amount of bandwidth and pixel fill performance they have does result in an increase in performance. Based on the 7870 to 7850 comparison, the PS4 GPU would at worst perform 2% faster than the 7850 if it were a PC GPU.

That means the rough performance spectrum is PS4 (2% min - 6.7% max) > 7850 (25%) > 7790 (10%) > Xbox One. That puts the PS4 GPU between 40 and 46% faster than the Xbox One GPU. This is the expected difference in any GPU limited scenario and is borne out by the number of 900p vs 1080p games where FPS is basically the same on both platforms.
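
The chained numbers, spelled out as a quick sanity check (the top end rounds to ~47% with these exact figures, i.e. the same ballpark):

```python
# Multiply the step-by-step advantages: PS4 over 7850, 7850 over 7790, 7790 over Xbox One.
low  = 1.02  * 1.25 * 1.10   # ~1.40 -> PS4 roughly 40% faster
high = 1.067 * 1.25 * 1.10   # ~1.47 -> PS4 roughly 47% faster
print(low, high)
```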

As my other posts have shown, in CPU limited scenarios a 4 core Jaguar at 2.05 GHz can only beat a 4 core Jaguar at 1.6 GHz by 10.6%. That would mean the best case scenario for Xbox One is a game that is CPU limited at parity settings on PS4 while maxing out the CPU on Xbox One; that would give the Xbox One an approximate 4% FPS advantage over the PS4 if the frame rate was unlocked, the API overhead was roughly the same and the level of optimisation for each version was roughly the same. In that scenario, though, the PS4 is capable of increasing the graphics settings a good amount over the Xbox One version due to the increased GPU capability, without impacting frame rate due to the CPU limited nature of the scenario.

In reality of course there are API overhead differences, there are optimisation differences and there are frame rate caps in place. In general this should mean the FPS gap is basically 0 with the PS4 enjoying a better graphical presentation. There will be outliers, fox engine games / unity, where the level of optimisation across the versions is different which swings the difference further than the expected numbers above. In the fox engine examples the PS4 has a > 100% resolution advantage over Xbox One and also enjoys some extra effects. In the Unity example the Xbox One has a consistent 20-25% fps advantage over the PS4 version. In the former example it will be related to optimisation of ESRAM usage where with enough effort they can probably hit 900p. In the latter the game is just a buggy mess in need of final optimisation passes.

Considering how closely the CPUs in each console perform, I find it very hard to imagine cases where one is CPU limited and the other is not. Given that the consoles have frame rate caps on them, I also find it hard to believe that any dev would design their game so that it was CPU limited below the cap for any sustained period of time; the odd hiccup or sequence that might be CPU limited I can see, but sustained lows due to CPU limitations will only come about due to poor optimisation.

#8  Edited By btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

Well, a lot of people see the ps4 gpu as 40 percent or even 50 percent faster, but experts say the performance of the x1's gpu is that of the hd 7790, an opinion that I share simply because more shader cores and memory bandwidth don't necessarily translate completely into performance, like I already explained with the hd 7870 xt and the hd 7950.

You don't have to be a genius to see that the memory bandwidth on the ps4 is complete overkill; the vastly stronger 7870 xt has about the same bandwidth, yet it performs a lot better. It even performs just as well as the hd 7950, which has a staggering 240 gb/s of memory bandwidth. So what good is 180 gb/sec for a measly hd 7850, especially when you pair it with such a weak cpu.

The difference between a hd 7790 and a hd 7850 is not that big, but it's still significant, and because of that the ps4 is managing to dish out better resolution, but not 50 percent better (or even 40 percent better resolution). The ps4 did manage to do that in the beginning, but now things are different; they get about 30 percent better resolution at best, but the framerates suffer, and that is because the cpu is bottlenecking and not all devs use that extra gpu power to counter that bottleneck.

The 7790 has 1.79 TFlops and the 7850 has 1.76 TFlops. The big difference between the 7790 and the 7850 is that the 7850 has more memory bandwidth and more pixel fillrate performance. So despite the shader performance deficit and the texture performance deficit, a 56.7% increase in memory bandwidth and a 58% increase in pixel fillrate performance (benchmarked, not theoretical, due to how bandwidth bound pixel fillrate is) gives a 25% increase in FPS on average.

The Xbox One has 1.31 TFlops and the PS4 has 1.84 TFlops, while also having more memory bandwidth and pixel fillrate. Based on the above, the increase in memory bandwidth and pixel fillrate should give a ~25% performance increase at 1080p on its own, with the shader advantage giving additional performance on top of this. Sometimes a game is fillrate/bandwidth bound, and in those cases it will be closer to a 25% advantage, which is why some games that are 900p on Xbox One perform a bit better than they do on PS4 when running at 1080p. However, in shader heavy games this advantage can stretch to 40% or more.

1080p is 44% higher resolution than 900p. Some games are now going for horizontal-only scaling, but 1080p is still 33% higher than the 1440x1080 used in Far Cry 4, and the use of HRAA in that game means the higher resolution version is using more samples for the AA, increasing IQ even further. In this game the frame rates are well within margin of error, with very little to separate them.
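
The resolution arithmetic behind those percentages:

```python
# Relative pixel counts for the resolutions discussed above.
full_1080p = 1920 * 1080
p900       = 1600 * 900
fc4_xb1    = 1440 * 1080   # Far Cry 4's horizontally scaled Xbox One framebuffer

print(full_1080p / p900)     # 1.44  -> 1080p pushes 44% more pixels than 900p
print(full_1080p / fc4_xb1)  # ~1.33 -> 33% more pixels than 1440x1080
```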

@evildead6789 said:

The more cpu-heavy games become, the more the ps4 will suffer if they don't translate gpu power into cpu power. It's as simple as that; a chain is always as strong as its weakest link. The x1 may have a weak cpu and gpu, but the ps4 has a low to moderate gpu paired with an even weaker cpu and way too much memory bandwidth.

Games will not become more CPU bound. There is a very fine line between being CPU bound on PS4 but not on Xbox One and being CPU bound on both, and devs are not going to straddle this line. There might be small sections, or hiccups, that are CPU bound on both, but these will be minor, and as can be seen from the benchmarks which are totally CPU bound, the 9% clock speed advantage will give less than a 5% FPS advantage in real world scenarios. With the lower API overhead, stronger CPUs and weaker GPUs in the consoles compared to those benchmarks, it is likely to be even less than this, especially if the frame rate is > 30 (or 60) the majority of the time, like in FC4, DA:I, The Crew, GTA:V etc.

The PS4 also does not have enough memory bandwidth to maximise the usage of all 32 of its ROPs. Pixel fill rate is very closely aligned with memory bandwidth and it takes 250+ GB/s of bandwidth to max out 32 ROPs with GCN 1.1.

#10 btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

@neatfeatguy said:

@evildead6789 said:

@neatfeatguy said:

I don't understand your reasoning that XB1 is better over the PS4 when it comes to power. You claim that PS4 has drops, is laggy in GTA5 in places XB1 isn't.....then again people say XB1 is laggy in places it isn't on PS4.....

You keep trying to compare that the CPU is the deciding factor that makes XB1 better. Yet, both systems run with different graphic configurations (this is clearly noticeable from pics when you compare games on the XB1 to the PS4). Unless you have both systems running the exact same specs (resolution and graphic settings) - all you're doing is comparing apples to oranges.

So do this and then come back to us with the results. Contact a developer, have them take a game (GTA5 for example) and hardset graphic settings and resolution to be applied to the XB1 and PS4. Then have them benchmark both and release the results - clearly they won't do this, but you can certainly ask. Until this happens, you cannot say XB1 is more powerful over PS4. You have two systems, with different hardware specs running games that have different graphic settings and sometimes across different resolutions.

I never said the xbox1 was better, I said it was more balanced and that the ps4 isn't stronger than the x1. The x1 has better cpu speed, the ps4 has more memory bandwidth and shader cores.

The ps4 indeed has better graphic configurations, but the x1 has smoother framerates, especially with recent games. The reason is that newer games are becoming more cpu intensive. Last gen games were already gpu intensive, but that's something you can easily counter by lowering resolution, AA, lighting.

Gpgpu tools can solve this problem for the ps4, but that doesn't leave them with headroom in the gpu department anymore, so it's clear as day that the systems will grow towards each other over time.

True, you may not have said that the XB1 is better - but you seem to be on a warpath to try and drill it into people's heads with all the ranting and raving that's been going on through this entire thread - therefore it's easy to draw the conclusion from you that XB1 is better.

In the end, you still need both systems to run the exact same graphic and resolution settings to determine which one is more powerful. XB1 has settings reduced in GTA5 when compared to what the PS4 allows the game to run with. Until the developers stop "tweaking" settings for a game to make it run better on the XB1 or PS4 and allow the settings to be universal across both consoles, you have no evidence to support whether XB1 or PS4 is more powerful. All you can do is argue hardware specs.

Taking a game and running it at medium/high settings at 1920x1080 with an i3-4340 using a 7770 and then comparing the same game using an i3-4330 using a 7850 and running on all high settings at 1920x1080 - it's not the same. Both hardware configurations should have the game ran with the same settings across the board to get an idea of what the performance difference would be to see if a faster CPU/slower GPU is better/worse over the slower CPU/faster GPU.

Well, that's because a lot of people keep on saying that the ps4 is a lot stronger than the x1. While it is true that the gpu is stronger, the cpu isn't, and the gpu numbers are completely warped. If you look at the shader cores only, yes, then indeed the gpu on the ps4 is 50 percent stronger, but when comparing tflops a lot of people don't realize that the esram performance isn't measured in tflops; it isn't even calculated into the whole tflops comparison.

I already gave an example with the hd 7870 xt and the hd 7950: both are comparable in performance, yet the hd 7950 has about 17 percent more shader cores and 25 percent more memory bandwidth. The 7870 xt solves it with higher clock speeds and higher power consumption. The differences are higher between the xbox one and ps4's gpus (50 percent more shader cores and more than double the bandwidth), but the xbox one gpu has esram, which doesn't close the gap completely of course; it closes the gap by about 50 percent, and since the gap is 50 percent, the ps4 is about 25 percent stronger.

But that doesn't do much good when you have a cpu bottleneck; you might be able to run the game at a higher resolution, but it's the cpu that will determine the framerates.

As for your example, the xbox one is faster than a hd 7770 and an i3-4340 is faster than a xbox one gpu. In benchmarks they always use the same settings, otherwise it wouldn't be a benchmark. There are no benchmarks available that use the same (or comparable) cpu/gpu configurations as the xbox one and ps4, since those configurations bottleneck, but the benchmarks that I provided do show what a cpu bottleneck is.

The ps4's cpu is comparable to an i3-530 in performance. Everyone that knows a thing or two about pc hardware knows that this cpu will bottleneck in recent and upcoming games.

As per the link in the above post, which I will also share again here, the console CPUs are miles behind even the bottom of the pile i3s. The only game they come close in is Tomb Raider, which is obviously GPU limited, as the difference there is margin of error. I find it much more likely that, with the console APIs and the fact that they use weaker GPUs than the 7970, they are going to be GPU bound, as these chips were in the Tomb Raider benchmark.

The PS4 GPU is roughly 40% faster overall than the Xbox One GPU. That takes into account the memory subsystem, the ROPs, the shaders, tessellation performance and texturing performance. That usually translates to a higher resolution, such as 1080p vs 900p, which uses most of the additional rendering budget available, but in some cases you also get improved effects as well.