btk2k2's forum posts


#1  Edited By btk2k2
Member since 2003 • 440 Posts

@evildead6789 said:

@neatfeatguy said:

@evildead6789 said:

@achilles614 said:

ITT the OP makes a good point about how you can't see the benefit of a stronger GPU if the CPU bottlenecks it... then most of the replies consist of people with an IQ of 2 who don't understand the principles of computers.

Sorry OP, it's hard dealing with morons.

yeah, indeed

it's comparable to pairing an i3 530 with a HD 7850 (PS4) and pairing an i3 550 with a HD 7790 (Xbox One). That i3 550 is about 10 percent stronger, and since it's the bottleneck it will determine the framerates. What does it matter if you can achieve a bit higher resolution with the HD 7850? That i3 530 will still bottleneck (more).

What Sony does is solve this bottleneck with GPGPU tools, but in the end it just gives you the same results. However, not everyone uses these GPGPU tools. Ubisoft used them, and that's why AC Unity runs at the same resolution, and because of the CPU bottleneck (GPGPU tools can't solve everything, the AI has to run on the CPU) the framerates are still a bit smoother on the Xbox One. Other games like Far Cry and Call of Duty run at a higher resolution than on the Xbox One, but again the Xbox One has smoother framerates because of the faster CPU.

Thanks for your input, because the CPU bottleneck is crystal clear and it has been mentioned by reviews (that compare games on both systems) as well. It's also obvious to anyone who knows a thing or two about PC hardware.

I don't understand your reasoning that the XB1 is better than the PS4 when it comes to power. You claim that the PS4 has drops and is laggy in GTA5 in places the XB1 isn't... then again people say the XB1 is laggy in places it isn't on the PS4...

You keep arguing that the CPU is the deciding factor that makes the XB1 better. Yet both systems run with different graphics configurations (this is clearly noticeable from pics when you compare games on the XB1 to the PS4). Unless you have both systems running the exact same specs (resolution and graphics settings), all you're doing is comparing apples to oranges.

So do this and then come back to us with the results. Contact a developer, have them take a game (GTA5 for example) and hard-set the graphics settings and resolution to be applied to the XB1 and PS4. Then have them benchmark both and release the results - clearly they won't do this, but you can certainly ask. Until this happens, you cannot say the XB1 is more powerful than the PS4. You have two systems with different hardware specs running games that have different graphics settings and sometimes at different resolutions.

I never said the Xbox One was better, I said it was more balanced and that the PS4 isn't stronger than the X1. The X1 has a higher CPU clock speed, the PS4 has more memory bandwidth and shader cores.

The PS4 does indeed have better graphics configurations, but the X1 has smoother framerates, especially with recent games. The reason is that newer games are becoming more CPU intensive. Last-gen games were already GPU intensive, but that's something you can easily counter by lowering resolution, AA and lighting.

GPGPU tools can solve this problem for the PS4, but that doesn't leave them with any headroom in the GPU department, so it's clear as day that the systems will grow closer to each other over time.

Except when you look at Jaguar-class CPUs being used on the PC with a dGPU you get this. What that shows is that across 6 games, a 28.125% clock speed advantage averages out at a 10.6% FPS advantage when CPU limited. The caveats are that this was done with 4-core Jaguar CPUs (not 8 as in the consoles), that they were paired with a 7970, which is substantially more powerful than the GPUs found in the consoles and makes CPU-limited scenarios a more common occurrence, and that the API overhead and optimisation level is exactly the same for both CPUs. Even if you ignore those caveats, it means that the 9.375% clock speed advantage the Xbox One has would translate to an average 3.6% FPS advantage in similar scenarios. Nothing to really write home about in the end, is it? The only real scenario where the Xbox One will get a decent performance advantage over the PS4 is where the Xbox One had more optimisation done.
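If you want to check the arithmetic yourself, here is a minimal sketch of it (my own illustration, using only the 28.125%, 10.6% and 9.375% figures quoted above, nothing new):

```python
# Rough estimate of the Xbox One's CPU-limited FPS advantage, derived from the
# Jaguar benchmark figures quoted above (illustration only, not new data).

clock_adv_bench = 2.05 / 1.6 - 1     # Athlon 5350 vs 5150: 28.125% clock advantage
fps_adv_bench = 0.106                # ~10.6% average FPS advantage when CPU limited

scaling = fps_adv_bench / clock_adv_bench    # ~0.38% of FPS per 1% of clock

clock_adv_consoles = 1.75 / 1.6 - 1  # Xbox One vs PS4 CPU clocks: 9.375%
est_fps_adv = clock_adv_consoles * scaling

print(f"scaling factor:        {scaling:.3f}")      # ~0.377
print(f"estimated FPS benefit: {est_fps_adv:.1%}")  # ~3.5%, i.e. roughly the ~3.6% above
```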

The only games where the Xbox One is smoother are AC:U, which is a total mess, and DA:I, where it runs at 900p vs 1080p. In most other games the performance is basically the same on both, and in a few the PS4 has both a graphical advantage and an FPS advantage.

It would be really interesting to compare the above CPUs in games that are running Mantle; that would give a better picture of what CPU bottlenecking looks like on consoles, as their APIs are closer to Mantle in terms of overhead than they are to DX11.


#2 btk2k2
Member since 2003 • 440 Posts

@ronvalencia said:
@clr84651 said:

PS4's RAM > X1's

PS4's GPU > X1's

PS4's CPU = X1's CPU

You enjoy your X1. It's OK if it doesn't have hardware as high-end as the PS4's.

RAM: PS4 > X1, In general, PS4's solution is superior.

GPU programmable: PS4 > X1 i.e. 1.84 TF with 8 ACE vs 1.31 TF with 2 ACE. PS4's 8 ACE maximises GPGPU hardware usage.

GPU raster: PS4 > X1 i.e. PS4's 32 ROPS is limited by memory bandwidth to about 24 ROPS effective vs X1's 17 ROPS effective (16 ROPS at 853 MHz ~= 17 ROPS at 800 MHz). X1's dual graphics command units maximise GPU graphics-related hardware usage and reduce the CPU's GPU graphics context overhead/management issues, but this doesn't change the CU and ROPS limits. Multi-threaded CPU-to-GPU submission can maximise GPU graphics-related hardware usage, i.e. AMD Mantle enables this feature on the PC GCN side. PS4 has Mantle-like APIs, hence DirectX 12 is a yawn.

GPU tessellation: X1 > PS4 i.e. X1's minor 1.7 billion triangles per second vs PS4's 1.6 billion triangles per second.

CPU: X1 > PS4 i.e. X1's minor 853 Mhz vs 800Mhz

**On AMD GCN ISA, the CU scalar processor can take over some of the CPU-related GPU management workloads (Ref 1), but this feature is not covered by DirectX or OpenGL.

Ref 1 http://timothylottes.blogspot.com.au/2013/08/notes-on-amd-gcn-isa.html as an example

DX and GL are years behind in API design compared to what is possible on GCN. For instance there is no need for the CPU to do any binding for a traditional material system with unique shaders/textures/samplers/buffers associated with geometry. Going to the metal on GCN, it would be trivial to pass a 32-bit index from the vertex shader to the pixel shader, then use the 32-bit index and S_BUFFER_LOAD_DWORDX16 to get constants, samplers, textures, buffers, and shaders associated with the material. Do a S_SETPC to branch to the proper shader.

This kind of system would use a set of uber-shaders grouped by material shader register usage. So all objects sharing a given uber-shader can get drawn in say one draw call (draw as many views as you have GPU perf for). No need for traditional vertex attributes either, again just one S_SETPC branch in the vertex shader and manually fetch what is needed from buffers or textures...

There is more than one way to minimise the CPU overhead.

An AMD Mantle-driven Radeon HD 7750 CrossFire setup with two graphics command units doesn't beat a Mantle-driven Radeon HD 7970 GHz Edition with one large graphics command unit.

The Radeon R9 290's and R9 285's graphics command unit (GCU) has to drive four tessellation units, hence the GCU would be scaled up accordingly.

Both X1's and PS4's GCNs have two tessellation units.

For once, Ron, I cannot disagree with anything you have said above. Minor correction on the CPU clock speed though: it is 1.75 GHz vs 1.6 GHz, but as you said it is a minor bump. I could maybe argue that the Xbox One's ROPS will only be the effective equivalent of 17 of the PS4's if it is pulling the data from ESRAM. If it needs to grab the data from the system RAM then it is also bandwidth limited, but again this is a very minor thing.
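To show where that effective-ROP figure comes from, a small sketch (my own illustration; the 16/32 ROP counts and 853/800 MHz clocks are the figures quoted above, and the ~24 bandwidth-limited number is Ron's estimate, not mine):

```python
# Clock-normalised ROP comparison using the figures quoted above (illustration only).
x1_rops, x1_gpu_mhz = 16, 853
ps4_rops, ps4_gpu_mhz = 32, 800    # bandwidth-limited to roughly 24 "effective" ROPS

x1_effective = x1_rops * x1_gpu_mhz / ps4_gpu_mhz
print(f"X1 ROPS normalised to PS4 clocks: {x1_effective:.1f}")   # ~17.1, the '17 ROPS' above
print(f"PS4 ROPS (bandwidth-limited):     ~24 of {ps4_rops}")
```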

All in all, to think the Xbox One is the PS4's equal in terms of horsepower is delusional. Yes, it has a minor CPU clock speed advantage which, in a like-for-like situation, will only yield the tiniest of advantages in certain scenarios. The issue is that at 1080p the GPUs in both consoles are too weak to really put the burden on the CPU, so in all but the most edge cases games will be GPU bound, not CPU bound. Further, in those edge cases that are CPU bound you very rarely see 100% clock speed scaling (unless you are StarCraft 2), so the 9.375% bump in clock speed would not yield a 9.375% performance advantage.

This shows that the Athlon 5350 with a 7970 can only outperform the 5150 with a 7970 by around 10%, despite having a 28.125% clock speed advantage. The 5350 is a 4-core Jaguar APU running at 2.05 GHz, the 5150 is a 4-core Jaguar APU running at 1.6 GHz, and the 7970 is a much faster GPU than what is in either the PS4 or the Xbox One, so what conclusions can we draw?

1) At best, a 28.125% clock speed increase sees a 21% increase in average frame rate. This occurred in Company of Heroes, which is an RTS game, and these are known for being very CPU intensive, just like 4X strategy games. On average the advantage is around 10.8%.

2) The 7970 is a much more powerful GPU than the PS4 GPU, and it would have been interesting to see if these games were just as CPU limited with an R7 265, which is much closer to the PS4 in terms of performance metrics.

3) The Jaguar CPUs are miles slower than even a low-end i3. This means that making use of GPU compute is an absolute must on both pieces of hardware.

4) On average, in the part-CPU, part-GPU bound scenarios that are most likely to occur, a 1% clock speed advantage results in roughly a 0.384% FPS increase (derived in the sketch below).
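A quick sketch of where those scaling figures come from (my own illustration; the 10.8% and 21% values are the benchmark figures from point 1):

```python
# FPS-per-clock scaling factors derived from the Athlon 5350 vs 5150 figures above
# (illustration only, not new benchmark data).
clock_gap_pct = (2.05 / 1.6 - 1) * 100     # 28.125% clock speed advantage

avg_scaling  = 10.8 / clock_gap_pct        # ~0.384% FPS per 1% of clock (average)
best_scaling = 21.0 / clock_gap_pct        # ~0.747% per 1% (Company of Heroes, best case)

print(f"average: {avg_scaling:.3f}, best case: {best_scaling:.3f}")
```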

Based on the above you would expect the Xbox One to outperform the PS4 by around 3.6% in part-CPU, part-GPU limited scenarios. How that averages out over an entire game, or an entire level, I do not know; it will depend on CPU usage throughout, but I doubt any game, or any level, is CPU limited all of the time. It may be in specific parts, but in general it will be GPU limited with the occasional part being CPU limited. This also assumes the API overhead and optimisation are the same for both platforms, which is unlikely to be the case. The gap might be a bit bigger in specific areas where the Xbox One API has less overhead than the PS4 API, or where the Xbox One received more optimisation than the PS4, and the gap might be smaller in the reverse scenario.

Ultimately a 9.375% clock speed advantage does not mean much and will not lead to any meaningful advantage on the Xbox One.


#3 btk2k2
Member since 2003 • 440 Posts

@04dcarraher said:

@btk2k2: @RossRichard: @tormentos:

lol, processing, texture, and pixel rates do not add up in the way you guys are thinking. In basic math, yes, 1.3 vs 1.8 is a 40% difference, 25.6 GP is 88% more than 13.6 GP, etc. However, the correct way to show the performance difference is to take what one can do and take the difference. 1.3 is 71% of 1.8, hence the 30% faster performance; 13.6 GPix/s is 53% of 25.6 GPix/s, which is the reason for the 47% faster performance; and for the texture rate, 40 vs 57: 57 is 40% more than 40, however 40 is 70% of 57, hence the 30% faster performance.

A prime example is the 7850 vs 7870: 1.76 vs 2.56 TFLOPS. You would say it's 45% faster, its texture rate of 80 vs 55 would be 45%, and its pixel rate of 32 vs 27.5 would be 17%. Whereas 1.76 is 68% of 2.56, hence a 32% performance difference; 55 is 62% of 80, hence 38% faster performance; and 27.5 is 85% of 32, so a 15% performance difference. End result for these GPUs: only an average difference of 15-20% in performance. We can even take the 7790 vs 7850, 1.79 vs 1.76 TFLOP and a texture rate of 56 vs 55, and with pixel rate see that the 7850 has a 72% difference, 16 vs 27.5, yet in reality the 7790 is only 25% slower on average at the same settings and resolutions. Then we can take it further with the 7770 and 7850: 1.28 vs 1.76 TFLOP, a near 50% difference; 40 vs 55 texture rate, a 36% difference; and 16 vs 27.5 pixel rate, again a 72% difference. That only yields an average difference of 45%.

The PS4 GPU is not 50% faster than the X1's, it's roughly around 35-40% faster overall at best.

Wrong.


#4 btk2k2
Member since 2003 • 440 Posts

@Tighaman: They are both GCN 1.1. I do not need to look at a diagram to calculate bandwidth, you just do the math: effective memory clock (GT/s) * bus width (bits) / 8 = bandwidth in GB/s. 5.5 * 256 / 8 = 176 GB/s. Sure, ESRAM has no contention, but it is tiny, so even though it is fast it cannot do everything and the GPU will be hitting the slow main RAM a lot.
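If you want to plug in other numbers, here is a tiny sketch of that formula (my own illustration; the helper name is made up):

```python
def bandwidth_gbs(transfer_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: effective transfer rate (GT/s) * bus width (bits) / 8."""
    return transfer_rate_gtps * bus_width_bits / 8

print(bandwidth_gbs(5.5, 256))  # PS4 GDDR5: 5.5 GT/s on a 256-bit bus -> 176.0 GB/s
```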

SHAPE was confirmed as being for Kinect and voice recognition. There is no advantage in the sound systems, as they are both equal for game purposes.

One GPU used as 2, or 2 used as one, is the same damn thing, and there is no way that is how the GPU is designed. It is a single 12 CU design. So you can use shaders to do back-end work? I think not, as that is a different stage in the pipeline.

You know next to nothing about this and you just spout the shit Mr X Media dribbles out. If you want to learn, look at the GCN deep dives done at AnandTech, as that is a good start. If you just want to believe fantasy-land bollocks then carry on.


#5 btk2k2
Member since 2003 • 440 Posts

@Tighaman said:

@btk2k2:

You people don't read, you just look at the surface.

PS4 has 8 ACEs, each ACE can do 8 queues = 64 queues

Xbox One has 2 compute processors and 2 graphics processors that can do 16 queues each = 64 queues

PS4 has 18 CUs with 130 GB/s of BW total for the system; if the CPU needs some, it goes lower, to only 110 GB/s of BW just for the GPU

Xbox One has 12 CUs, 150 GB/s for the GPU and 50 GB/s for the CPU, with no need to fight over bandwidth.

It takes 225 GB/s of BW to totally saturate 12 CUs for graphics, which means the PS4 is wasting ALUs trying to use 18 CUs just for graphics, hence 14+4 will never have enough bandwidth to use 18 CUs just for graphics without async compute.

PS4 has a 1.8 TFLOPS GPU that uses resources for CPU offloading, in-game music, video compressing/decompressing, colour compressing/decompressing.

Xbox One has a GPU split into TWO GPUs, so I don't know the total TFLOPS, but I do know that it has different processors for CPU offloading, SHAPE, swizzled textures, compressing and decompressing, all with their own cache and bandwidth.

Nobody on this forum is telling me anything I don't already know. The reason they are using DX12 is that you cannot target one GPU at a time or use processors individually. I'm not telling you anything from a blog or these forums, this is stuff I have been hearing from the makers of the consoles.

The Xbox One has 2 ACE units which can store 2 commands each, giving a total of 4 stored commands for compute.

Based on the die shots it only has 1 command processor for issuing rendering commands, but if it does have 2, that is a waste of die space that could have been used for something else.

The PS4 has 176 GB/s of memory bandwidth; this drops to 156 GB/s if the CPU is using its full 20 GB/s allocation. This is about the same as the 7850 and the 7870, with the shader and ROP performance being closer to that of the 7850 than the 7870.

The Xbox One's DDR3 has 68 GB/s of bandwidth and the ESRAM has up to 190 GB/s of bandwidth, but in most scenarios it tops out around 120 GB/s. The CPU can use up to 30 GB/s of the DDR3 bandwidth.
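The same bandwidth formula as before gives the 68 GB/s figure if you assume DDR3-2133 on a 256-bit interface (my assumption here; it matches the number quoted):

```python
# Xbox One DDR3 bandwidth, assuming DDR3-2133 (2.133 GT/s) on a 256-bit bus.
print(2.133 * 256 / 8)  # ~68.3 GB/s, matching the 68 GB/s figure above
```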

SHAPE is for Kinect audio decoding and has nothing to do with game audio; both consoles are likely using the TrueAudio that is built into GCN 1.1 for their in-game audio.

If it were so difficult to target multiple command processors then it would stop SLI or CrossFire from working. The fact that these technologies work just fine (most of the time) suggests that this is not actually a problem. And if the Xbox One is based around a 2x 6 CU, 8 ROP design, that just seems rather daft, as it would not perform any better than a 1x 12 CU, 16 ROP design and would probably perform worse, since you never see 100% scaling when going from a single GPU to dual GPUs. Further, it would mean that you need to store each asset twice so each 'card' could use it, which would be a huge memory hog. There is no logical reason, or evidence, to suggest this is the case, and it would be detrimental to performance if it were.

You might read this stuff, but that does not make it true, or likely, or logical. I am not telling you anything from a blog or forums, I am telling you stuff based on the die shots of the APUs and my knowledge of the GCN architecture.


#6 btk2k2
Member since 2003 • 440 Posts
@04dcarraher said:

@RossRichard said:

1.8 teraflops is 50% more than 1.2. Not that hard of a concept to grasp.

That is not how you compare GPUs. 1.2 TFLOP is 66% of the processing power of 1.8. However, they did gain back usage from Kinect, so it's actually back at 1.3 TFLOPS. So in reality, at best it's around 30%. Then there are the texture rates and pixel rates, which show the PS4 has 47% more pixel fill rate, 13.6 GPix/s vs 25.6 GPix/s. But as for texture fill rate, it is about a 30% difference, 40 vs 57. So all in all the PS4 is roughly around 35-40% faster, not 50% as many claim.

You suck at maths.

60 is 100% more than 30.

30 is 50% of 60.

1.84 TF is 40.5% more than 1.31 TF

1.31 TF is 71.2% of 1.84 TF

25.6 Gpix/s is 88.2% more than 13.6 GPix/s

13.6 GPix/s is 53.1% of 25.6 GPix/s

The interesting thing about pixel fillrate is that it is bandwidth limited in most cases. On Xbox One it can maximise ROP performance only if it is getting the data from ESRAM. If it is getting it from the DDR3, the actual real-world difference is a lot more in favour of the PS4.
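Here is a one-off sketch of the two directions of comparison being mixed up (my own illustration, same numbers as in this post):

```python
def pct_more(a: float, b: float) -> float:
    """How much bigger a is than b: 'a is X% more than b'."""
    return (a / b - 1) * 100

def pct_of(a: float, b: float) -> float:
    """a as a percentage of b: 'a is X% of b'."""
    return a / b * 100

print(pct_more(1.84, 1.31))   # ~40.5 -> 1.84 TF is 40.5% more than 1.31 TF
print(pct_of(1.31, 1.84))     # ~71.2 -> 1.31 TF is 71.2% of 1.84 TF
print(pct_more(25.6, 13.6))   # ~88.2 -> 25.6 GPix/s is 88.2% more than 13.6 GPix/s
print(pct_of(13.6, 25.6))     # ~53.1 -> 13.6 GPix/s is 53.1% of 25.6 GPix/s
```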

@Tighaman said:

@tormentos:

Them quotes and that diagram are from the IEEE journal of John Sells, the maker of the XBOX ONE, and he is explaining it to the IEEE. You mad? Look at your face, I see it now, it's hilarious. You still don't get it, do you? Of course you don't see anything with DUAL GRAPHIC CONTEXT, BECAUSE IT'S NOT ONE lol. THIS IS TRUE HSA. Fast cache, microprocessors, DSP, GRAPHIC CONTEXT SWITCHING.

So you are saying that multi-GPU systems running SLI or CrossFire will not work until DX12 because of multiple command processors? Also, you do realise that the compute command processor is in fact the ACE unit, of which the PS4 has 8 compared to the Xbox One's 2? Why waste die space with multiple contexts when you could use it for extra shaders? It is just a waste of resources and I do not buy it, because the die shot of the Xbox One APU just shows a standard GCN config based on 14 CUs and 4 render back-ends. If you want to believe in fairy tales though then fine, go ahead.


#7 btk2k2
Member since 2003 • 440 Posts
@scottpsfan14 said:

But let me ask you this: would you prefer 900p with the same effects and foliage as the PS4, or 1080p and how it currently is? I'd say the former tbh.

I would prefer the latter. 1080p improves the quality of every frame of the game. The extra grass, while a nice bonus for those who have a PS4, is not worth dropping the resolution for, especially when in other areas the games look almost identical.

As it is, the Xbox One looks anywhere from the same to somewhat worse depending on where you are in the map. If they had chosen 900p, the Xbox One version would look a bit worse all of the time, all relative to the PS4 of course.


#8 btk2k2
Member since 2003 • 440 Posts

@sts106mat said:

@GrenadeLauncher said:

@sts106mat said:

LOL thread shutdown

NPC number ruled out... still likely to be a CPU issue. But your thread title / OP doesn't say that, does it?

next time you feel the need to make a thread, ya might wanna think twice jospeh.

Is this your attempt at ownage, Simon? Ubisoft insisted that crowd sizes were the reason for the resolution and framerate choice. They insisted it was a CPU-heavy game, hence those settings for the crowd sizes. Lemmings insisted that the PS4 was holding back the Xbone because of its uber-better CPU with a whole .15 more GHz of powah. Now it turns out that, no, it wasn't like that. At all. It's rushed, unoptimised code. Like we all said it was.

err, the problem is still CPU, what part of that doesn't make sense? They may have thought the crowd was the issue (it'd be most people's first thought) but have they come out and said they were wrong?

So

There is still a difference between the X1 and the PS4, and it's likely a CPU issue......

once again, your thread title and OP have been blown out.........

The worst thing about you is that you cannot ever admit to being wrong... even @lglz1337 has more credibility than you in my book right now.

Let's say you have a box that can store a certain amount of stuff. If you throw the stuff in the box haphazardly and you end up filling it up before you have put all of your stuff in, can you blame the box because you did not bother to maximise the space?

This is pretty much the issue: the CPU can handle the workload, but Ubisoft have not optimised the code to enable it to, so it sits idle or does work it does not need to do. This will affect all versions of the game, and hopefully once they do optimise it they can hit a much steadier 30 FPS on both the Xbox One and the PS4.

As has been stated many times previously, the 9.4% CPU advantage the Xbox One has is not enough, on its own, to explain why it is getting a 20% increase in FPS.
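A quick sanity check of that point (my own illustration; the 0.384 scaling factor is the benchmark-derived figure used earlier in this thread, and the 20% gap is the reported AC:U difference):

```python
cpu_clock_adv = 0.094      # Xbox One CPU clock advantage (~9.4%)
reported_gap  = 0.20       # reported AC:U frame rate gap

best_case = cpu_clock_adv * 1.0     # even with perfect 1:1 clock-to-FPS scaling
typical   = cpu_clock_adv * 0.384   # with the benchmark-derived scaling factor

print(f"best case {best_case:.1%}, typical {typical:.1%}, reported {reported_gap:.0%}")
# 9.4% and ~3.6% both fall well short of 20%, so the clock gap alone cannot explain it.
```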


#9 btk2k2
Member since 2003 • 440 Posts

I had this too, but I just logged into the profile attached to my US account, the date reset back to the US launch date, and when I went back to my main account it was fine.


#10  Edited By btk2k2
Member since 2003 • 440 Posts

There is more going on than just the CPU speed difference here. The FPS is around 20% higher on the Xbox One, but the CPU is only 9% faster, meaning that the CPU alone does not account for the difference.

The most likely explanation based on PC and console benchmarks is that they developed the game with Nvidia Gameworks and then ported it over to the Xbox One, fixed some issues (but not all) and then did a quick port to the PS4 but did not do any performance tuning.

In general the game is just a mess; it needed another 3 to 6 months of bug fixing and optimisation before release, but Ubi wanted a next-gen AC game out for the holidays, so they released it in its current unfinished state.