ESRAM Faster Than MS Thought!

This topic is locked from further discussion.

#51 Caseytappy
Member since 2005 • 2199 Posts

 

Sony still needs that killer app.

kingoflife9

Are you talking about actual games? LOL

It's all about specs and resolution now in System Wars, because the PS4 seems to be as powerful as a low/midrange PC and the Xbone a low-end one; the few actual next-gen games are all ridiculed over shaders, resolution, lighting, LOD or whatever.

The Cows seem to have turned into pseudo tech-heads, drunk on that small advantage in power, and the forum has become a new Tom's Hardware.

Gamewise it all looks pretty bland though :(

#53 monsterpuncher
Member since 2013 • 177 Posts
"Xbox One is weaker and it's a pain to use its ESRAM,  -from developer working on xbone game LOL
#54 ronvalencia
Member since 2008 • 29612 Posts

"Xbox One is weaker and it's a pain to use its ESRAM, -from developer working on xbone game LOLmonsterpuncher

http://gearnuke.com/microsoft-surprised-xbox-ones-esram-backlash-calls-esram-evolution-xbox-360s-edram/

"This controversy is rather surprising to me, especially when you view ESRAM as the evolution of eDRAM from the Xbox 360. No-one questions on the Xbox 360 whether we can get the eDRAM bandwidth concurrent with the bandwidth coming out of system memory. In fact, the system design required it," explains Andrew Goossen.

...

"The Xbox 360 was the easiest console platform to develop for. It wasn't that hard for our developers to adapt to eDRAM, but there were a number of places where we said, 'gosh, it would sure be nice if an entire render target didn't have to live in eDRAM,' and so we fixed that on Xbox One, where we have the ability to overflow from ESRAM into DDR3, so the ESRAM is fully integrated into our page tables and you can kind of mix and match the ESRAM and the DDR memory as you go. From my perspective it's very much an evolution and improvement, a big improvement, over the design we had with the Xbox 360."

#56 monsterpuncher
Member since 2013 • 177 Posts
Basically this is the same old "ESRAM is the special sauce" rumor. It's bullshit, and it's been debunked a hundred times over by now. When anybody uses this "argument" you know they are a Microsoft shill. Disregard them instantly. The ESRAM in the xbone is a band-aid solution. It's not secret sauce. It's not going to offer infinite power... teh powa of the cloud (my god, how blatantly despicable Microsoft's marketing is). The GDDR5 unified RAM setup in the PS4 outclasses the DDR3+ESRAM setup in the xbone.
#57 ronvalencia
Member since 2008 • 29612 Posts

Basically this is the same old "ESRAM is the special sauce" rumor. It's bullshit, and it's been debunked a hundred times over by now. When anybody uses this "argument" you know they are a Microsoft shill. Disregard them instantly. The ESRAM in the xbone is a band-aid solution. It's not secret sauce. It's not going to offer infinite power... teh powa of the cloud (my god, how blatantly despicable Microsoft's marketing is). The GDDR5 unified RAM setup in the PS4 outclasses the DDR3+ESRAM setup in the xbone. monsterpuncher

There's nothing special about having a very fast small memory alongside a large slower memory pool, i.e. an extreme asymmetric multi-memory-controller setup.

My Samsung ATIV Book 7 Ultrabook has a 2 GB (soldered on-board) + 8 GB (SO-DIMM) dual-memory-controller asymmetric setup.

X1's "ESRAM is fully integrated into our page tables" basically says the 32 MB ESRAM and DDR3 memory pools are unified, i.e. the memory pools are divided into pages as in a modern Unix-type OS.

-----

Xbox 360's EDRAM is limited by:

1. 8 ROPS (similar to RSX's 8 ROPS limit);

2. the pipe connecting the GPU and the embedded memory, i.e. about 32 GB/s;

3. a render target's inability to spill over into the GDDR3 memory pool.

#58 monsterpuncher
Member since 2013 • 177 Posts
The facts:

So currently:

Xbone: 1.18 TF GPU (12 CUs) for games
Xbone: 768 Shaders
Xbone: 48 Texture units
Xbone: 16 ROPS
Xbone: 2 ACE / 16 queues

PS4: 1.84 TF GPU (18 CUs) for games (+56%)
PS4: 1152 Shaders (+50%)
PS4: 72 Texture units (+50%)
PS4: 32 ROPS (+100%)
PS4: 8 ACE / 64 queues (+400%)

Looks unbalanced to me

#59 ronvalencia
Member since 2008 • 29612 Posts

The facts:

So currently:

Xbone: 1.18 TF GPU (12 CUs) for games
Xbone: 768 Shaders
Xbone: 48 Texture units
Xbone: 16 ROPS
Xbone: 2 ACE / 16 queues

PS4: 1.84 TF GPU (18 CUs) for games (+56%)
PS4: 1152 Shaders (+50%)
PS4: 72 Texture units (+50%)
PS4: 32 ROPS (+100%)
PS4: 8 ACE / 64 queues (+400%)

Looks unbalanced to me monsterpuncher

Incomplete facts.

X1 GPU Triangle rate: 1.7 billion per second

PS4 GPU Triangle rate: 1.6 billion per second
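Those rates follow from the front end's setup rate; a quick check, assuming both GPUs set up 2 primitives per clock, at 853MHz (X1) and 800MHz (PS4):

# Triangle setup rate = primitives per clock * GPU core clock.
# Assumes 2 prims/clock for both front ends; 853 MHz vs 800 MHz clocks.
def tri_rate(prims_per_clock, clock_hz):
    return prims_per_clock * clock_hz

print(tri_rate(2, 853e6) / 1e9)   # ~1.71 billion triangles/s (X1)
print(tri_rate(2, 800e6) / 1e9)   # 1.60 billion triangles/s (PS4)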

------------

http://www.edge-online.com/news/gaijin-games-on-why-war-thunder-isnt-coming-to-xbox-one/

How much more powerful?

AY: It depends what you're doing. GPU, like 40 per cent more powerful(1). DDR5 is basically 50 per cent more powerful than DDR3, but the memory write [performance] is bigger on Xbox One, so it depends on what you're doing.

How is that going to translate to on-screen results for the kinds of games you want to make? So to optimise War Thunder on both consoles you could hypothetically make a better, prettier version on PS4?

AY: Yep.

KY: Probably yes. But again, that's not a very big deal.

-------------

1. PS4 has its own GPU reservations.

2. PS4's 32 ROPS would not be fully used, since its memory bandwidth gimps them, i.e. the 7970 (~264 GB/s memory bandwidth) and its 32 ROPS say hi. Fill rate works through memory writes, and it would be stupidity on your part to think ROPS operate in isolation from memory bandwidth. AMD's updated Radeon R9-290X pairs 64 ROPS (and 44 CUs) with ~320 GB/s of bandwidth.

Reading and writing, 16 ROPS can fill 150+ GB/s. At 1080p, most games are not ROPS-bound, hence why GPUs like the GeForce 660 Ti can get away with it.

3. On the multiple-ACE issue:

http://www.tomshardware.com/reviews/radeon-hd-7970-benchmark-tahiti-gcn,3104-2.html

"The CU has its own hardware scheduler that's able to assign wavefronts to available VUs with limited out-of-order capability to avoid dependency bottlenecks."

[Diagram: GCN compute unit layout]

AMD GCN's CU can process multiple kernels at once.

"This is the key to better compute performance because it gives each VU the ability to work on different wavefronts if a dependency exists in the queue."

[Diagram: compute unit dependency handling]

http://www.amd.com/us/Documents/GCN_Architecture_whitepaper.pdf

In GCN, each SIMD unit is assigned its own 40-bit program counter and instruction buffer for 10 wavefronts. The whole CU can thus have 40 wavefronts in flight, each potentially from a different work-group or kernel, which is substantially more flexible than previous designs. This means that a GCN GPU with 32 CUs, such as the AMD Radeon HD 7970, can be working on up to 81,920 work items at a time.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/4

However the CU and SIMDs can select a different wavefront to work on; this can be another wavefront spawned by the same task (e.g. a different group of pixels/values) or it can be a wavefront from a different task entirely.
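The whitepaper's 81,920 figure is easy to verify, assuming 4 SIMDs per CU, 10 wavefronts per SIMD, and 64 work-items per wavefront:

# Work-items in flight = CUs * SIMDs/CU * wavefronts/SIMD * work-items/wavefront
def max_work_items(cus, simds_per_cu=4, waves_per_simd=10, wave_size=64):
    return cus * simds_per_cu * waves_per_simd * wave_size

print(max_work_items(32))   # 81920 on a 32-CU HD 7970
print(max_work_items(18))   # 46080 on PS4's 18 CUs
print(max_work_items(12))   # 30720 on X1's 12 active CUs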

#60 Krelian-co
Member since 2006 • 13274 Posts

In short ps4>>>xbone. Accept it already.. That's not gonna change blamix99

ron won't admit it, he's still arguing about the secret sauce, even when Microsoft themselves said the sauce is a lie.

#61 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="blamix99"]In short ps4>>>xbone. Accept it already.. That's not gonna changeKrelian-co

ron wont admit it, hes still arguing about the secret sauce, even when microsoftthemselves said the sauce is a lie.

There's nothing to admit and there's no secret sauce.

Again, my 7970's 32 ROPS says Hi.

My posts support both PS4 and X1.

http://www.videogamer.com/news/xbox_one_and_ps4_have_no_advantage_over_the_other_says_redlynx.html

Speaking to VideoGamer.com at E3, Ilvessuo said: "Obviously we have been developing this game for a while and you can see the comparisons. I would say if you know how to use the platform they are both very powerful. I don't see a benefit over the other with any of the consoles."

----

http://www.videogamer.com/xboxone/metal_gear_solid_5_the_phantom_pain/news/ps4_and_xbox_one_power_difference_is_minimal_says_kojima.html

"The difference is small, and I don't really need to worry about it," he said, suggesting versions for Xbox One and PS4 won't be dramatically different.

----

http://gamingbolt.com/ubisoft-explains-the-difference-between-ps4-and-xbox-one-versions-of-watch_dogs

"Of course, the Xbox One isnt to be counted out. We asked Guay how the Xbox One version of Watch_Dogs would be different compared to the PC and PS4 versions of the game, to which he replied that, The Xbox One is a powerful platform, as of now we do not foresee a major difference in on screen result between the PS4 and the Xbox One. Obviously since we are still working on pushing the game on these new consoles, we are still doing R&D."

----

link

"We're still very much in the R&D period, that's what I call it, because the hardware is still new," Guay answered. "It's obvious to us that its going to take a little while before we can get to the full power of those machines and harness everything. But, even now we realise that both of them have comparable power, and for us thats good, but everyday it changes almost. Were pushing it and were going to continue doing that until [Watch Dogs] ship date."

http://www.eurogamer.net/articles/df-hardware-spec-analysis-durango-vs-orbis

"Other information has also come to light offering up a further Orbis advantage: the Sony hardware has a surprisingly large 32 ROPs (Render Output units) up against 16 on Durango. ROPs translate pixel and texel values into the final image sent to the display: on a very rough level, the more ROPs you have, the higher the resolution you can address (hardware anti-aliasing capability is also tied into the ROPs).16 ROPs is sufficient to maintain 1080p, 32 comes across as overkill, but it could be useful for addressing stereoscopic 1080p for instance, or even 4K. However, our sources suggest that Orbis is designed principally for displays with a maximum 1080p resolution."

http://www.polygon.com/2013/8/1/4580380/carmack-on-next-gen-console-hardware-very-close-very-good

Carmack on next-gen console hardware: 'very close,' 'very good'

http://www.inquisitr.com/953465/xbox-one-vs-playstation-4-game-developers-say-its-a-draw/

"When talking about the PlayStation 4 and Xbox One, John Carmack said they are both very close and very good. John Carmack is the id Software technical director responsible for Doom and Quake. Many other games use the game engines and graphics effects he had a hand in inventing. So when John Carmack says he feels the two gaming consoles bring essentially the same capabilities to developers, its probably wise to believe him."

http://www.edge-online.com/news/gaijin-games-on-why-war-thunder-isnt-coming-to-xbox-one/

How much more powerful?

AY: It depends what you're doing. GPU, like 40 per cent more powerful. DDR5 is basically 50 per cent more powerful than DDR3, but the memory write [performance] is bigger on Xbox One, so it depends on what you're doing.

How is that going to translate to on-screen results for the kinds of games you want to make? So to optimise War Thunder on both consoles you could hypothetically make a better, prettier version on PS4?

AY: Yep.

KY: Probably yes. But again, that's not a very big deal.

http://www.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

Xbox One does, however, boast superior performance to PS4 in other ways. "Let's say you are using procedural generation or raytracing via parametric surfaces (that is, using a lot of memory writes and not much texturing or ALU); Xbox One will likely be faster," said one developer.

#62 Shewgenja
Member since 2009 • 21456 Posts

I think the moral of the story is that you shouldn't pay more than $400 for a graphics card that doesn't have DDR3 and ESRAM.  Clearly, GDDR5 has just been hype all along from NVidia and ATI.

#63 Krelian-co
Member since 2006 • 13274 Posts

[QUOTE="Krelian-co"]

[QUOTE="blamix99"]In short ps4>>>xbone. Accept it already.. That's not gonna changeronvalencia

ron wont admit it, hes still arguing about the secret sauce, even when microsoftthemselves said the sauce is a lie.

Useless garbage

i also have a 7970, why does it matter again when you repeat it in every post you make?

now the question is does that makes the xbox better? no, is still weaker than the ps4, #dealwithit #secretsauceisalive

#65 monsterpuncher
Member since 2013 • 177 Posts

I think the moral of the story is that you shouldn't pay more than $400 for a graphics card that doesn't have DDR3 and ESRAM.  Clearly, GDDR5 has just been hype all along from NVidia and ATI.

Shewgenja
But it's all about the infinite powa of teh cloud. Microsoft said so. Microsoft: "oh, numbers don't mean anything, we have the power of the cloud."
#66 ronvalencia
Member since 2008 • 29612 Posts

I think the moral of the story is that you shouldn't pay more than $400 for a graphics card that doesn't have DDR3 and ESRAM. Clearly, GDDR5 has just been hype all along from NVidia and ATI.

Shewgenja

On PCs you can't target anything machine-specific, and X1's ESRAM requires being very specific.

It's easier to shovel tons of textures/data into a fast, large memory pool than to specifically target the 32 MB of fast ESRAM and manage that small fast memory pool.

----

Side note

PS4's shader language is very similar to DirectX's HLSL(1)(2), which mirrors AMD Mantle's support for DirectX's HLSL, i.e. OpenGL can go to hell.

1. http://hexus.net/gaming/news/ps4/53521-sony-shares-details-ps4-gdc-2013/

2. http://vr-zone.com/articles/sony-delivers-new-details-on-ps4-during-gdc-2013-presentation/19394.html

#67 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"]

[QUOTE="Krelian-co"]

ron wont admit it, hes still arguing about the secret sauce, even when microsoftthemselves said the sauce is a lie.

Krelian-co

Useless garbage

i also have a 7970, why does it matter again when you repeat it in every post you make?

now the question is does that makes the xbox better? no, is still weaker than the ps4, #dealwithit #secretsauceisalive

7970 (@ 925Mhz)'s fill rate is higher than 7850 (@860Mhz) and both has 32 ROPS. At 1080p resolution, most games are not ROPS bound.

http://www.eurogamer.net/articles/digitalfoundry-microsoft-to-unlock-more-gpu-power-for-xbox-one-developers

"For example, consider a typical game scenario where the render target is 32bpp [bits per pixel] and blending is disabled, and the depth/stencil surface is 32bpp with Z [depth] enabled. That amount to 12 bytes of bandwidth needed per pixel drawn (eight bytes write, four bytes read). At our peak fill-rate of 13.65GPixels/s that adds up to 164GB/s of real bandwidth that is needed which pretty much saturates our ESRAM bandwidth. In this case, even if we had doubled the number of ROPs, the effective fill-rate would not have changed because we would be bottlenecked on bandwidth. In other words, we balanced our ROPs to our bandwidth for our target scenarios

#68 Benny_Blakk
Member since 2007 • 910 Posts

The interview that DF did and the details of the X1 are wasted on the window lickers here in SW except for a select few. 

 

Most will continue to quote SonyFlops, and GDDR5 without any knowledge of how systems work and integrate with software.

 

The games will be where Sony Fan gets put in their place.  Just look at Ryse which is beautiful, yet Sony Fan is still talking about SonyFlops, and GDDR5.

kuu2

Trying to argue that games are where Microsoft will "win" is actually self-defeating; an example is starting with Ryse to plead your case.

#69 Krelian-co
Member since 2006 • 13274 Posts

[QUOTE="Krelian-co"]

[QUOTE="ronvalencia"]

Useless garbage

ronvalencia

i also have a 7970, why does it matter again when you repeat it in every post you make?

now the question is does that makes the xbox better? no, is still weaker than the ps4, #dealwithit #secretsauceisalive

7970 (@ 925Mhz)'s fill rate is higher than 7850 (@860Mhz) and both has 32 ROPS.

again, slowly this time so you can understand the question: why does that matter?

the ps4 is still stronger than xbox, why comparing those two gpus matter?

#70 NFJSupreme
Member since 2005 • 6605 Posts
The PS4 does not have a significant memory advantage; only fanboys think that. The only significant advantage the PS4 has over the xbone is a better GPU. That is it. Take that however you want, but that is it in terms of hardware advantages.
#71 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"][QUOTE="Krelian-co"]

i also have a 7970, why does it matter again when you repeat it in every post you make?

now the question is does that makes the xbox better? no, is still weaker than the ps4, #dealwithit #secretsauceisalive

Krelian-co

7970 (@ 925Mhz)'s fill rate is higher than 7850 (@860Mhz) and both has 32 ROPS.

again, slowly this time so you can understand the question: why does that matter?

the ps4 is still stronger than xbox, why comparing those two gpus matter?

On 32 ROPS issue, read http://www.eurogamer.net/articles/digitalfoundry-microsoft-to-unlock-more-gpu-power-for-xbox-one-developers

"For example, consider a typical game scenario where the render target is 32bpp [bits per pixel] and blending is disabled, and the depth/stencil surface is 32bpp with Z [depth] enabled. That amount to 12 bytes of bandwidth needed per pixel drawn (eight bytes write, four bytes read). At our peak fill-rate of 13.65GPixels/s that adds up to 164GB/s of real bandwidth that is needed which pretty much saturates our ESRAM bandwidth. In this case, even if we had doubled the number of ROPs, the effective fill-rate would not have changed because we would be bottlenecked on bandwidth. In other words, we balanced our ROPs to our bandwidth for our target scenarios

AMD's 32 ROPS was designed for resolutions higher than 1080p. Blame Sony for putting a cap on 1080p.

#73 tormentos
Member since 2003 • 33793 Posts

 

I also have a 7970; why does it matter again when you repeat it in every post you make?

Now the question is: does that make the xbox better? No, it's still weaker than the ps4, #dealwithit #secretsauceisalive

Krelian-co

It is the belief that 32 ROPs are overkill for the PS4 because it is 1080p; it is already known that the xbox one at 1080p can run into problems in graphics-intensive games. The reason Forza doesn't is that it is a racing game with last-gen effects.

In fact, Forza doesn't have a day/night cycle or dynamic weather because they would be a hit to performance, and the game would not run at 1080p and 60 FPS.

32 ROPs can be put to good use at 1080p; games like Crysis 3 do push them. The xbox one's 32 MB of ESRAM is not enough; they would need something like 48 MB. Hell, Killzone SF already has a 48 MB framebuffer; if you tried to fit Killzone on the xbox one, the resolution would not hit 1080p.

 

#74 ronvalencia
Member since 2008 • 29612 Posts
The PS4 does not have a significant memory advantage; only fanboys think that. The only significant advantage the PS4 has over the xbone is a better GPU. That is it. Take that however you want, but that is it in terms of hardware advantages. NFJSupreme

That's correct, i.e. a slightly faster, mildly overclocked 7850 (e.g. 900MHz) solution against a gimped prototype 7850 with 12 CUs.
#76 monsterpuncher
Member since 2013 • 177 Posts
The PS4 does not have a significant memory advantage; only fanboys think that. The only significant advantage the PS4 has over the xbone is a better GPU. That is it. Take that however you want, but that is it in terms of hardware advantages. NFJSupreme

My goodness, there is some hard-core denial going on with you xbots. It's bordering on mental illness. You are delusional. You xbots are delusional. http://m.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

#77 tormentos
Member since 2003 • 33793 Posts

ESRAM could be 1,000 GB/s and it wouldn't change the fact that the GPU glued to it is gimped. You can give a 7790 the bandwidth of the 7970 and that would not make the 7790 beat the 7970 or work like it; bandwidth without power is a waste.

#78 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="NFJSupreme"]the PS4 does not have a significant memory advantage. Only fanboys think that. The only significant advantage the PS4 has over the xbone is it has a better GPU. That is it. Take that how ever you want to take it but that is it in terms of hardware advantages. monsterpuncher
My goodness, there is some hard core denial going on with you xbots Its bordering on mental illness. You are delusional. You xbots are delusional

My goodness, there is some hard core denial going on with you ponies

Its bordering on mental illness. You are delusional. You ponies are delusional.

In regards to memory bandwidth, you are ignoring a critical X1's ESRAM component .

http://www.edge-online.com/news/gaijin-games-on-why-war-thunder-isnt-coming-to-xbox-one/

How much more powerful?

AY: It depends what youre doing. GPU, like 40 per cent more powerful. DDR5 is basically 50 per cent more powerful than DDR3, but the memory write [performance] is bigger on Xbox One so it depends on what youre doing.

How is that going to translate to on-screen results for the kinds of games you want to make? So to optimise War Thunder on both consoles you could hypothetically make a better, prettier version on PS4?

AY: Yep.

KY: Probably yes. But again, thats not a very big deal.

#79 NFJSupreme
Member since 2005 • 6605 Posts

[QUOTE="NFJSupreme"]the PS4 does not have a significant memory advantage. Only fanboys think that. The only significant advantage the PS4 has over the xbone is it has a better GPU. That is it. Take that how ever you want to take it but that is it in terms of hardware advantages. monsterpuncher
My goodness, there is some hard core denial going on with you xbots Its bordering on mental illness. You are delusional. You xbots are delusional http://m.edge-online.com/news/power-struggle-the-real-differences-between-ps4-and-xbox-one-performance/

 

yes this "xbot" who is buying a PS4 is delusional.  Whatever you tell yourself to sleep at night but the facts are the facts.  The only meaningful advantage the PS4 has hardware wise is it's GPU.  In real world performance the memory are about the same.  Both have the same CPU.  The real difference is in the GPU and honestly that is the only difference you need to brag about anyway.

#83 RimacBugatti
Member since 2013 • 1632 Posts
I'm sorry to say it, but Wii U hardware is complete crap. The graphics are worse than current-gen. Nintendo forgot that there is no point in buying a new system with the same or worse graphics. Gameplay is important, but if a game runs horribly then overall it's a horrible game, especially if you can't play it.
#84 ronvalencia
Member since 2008 • 29612 Posts

ESRAM could be 1,000 GB/s and it wouldn't change the fact that the GPU glued to it is gimped. You can give a 7790 the bandwidth of the 7970 and that would not make the 7790 beat the 7970 or work like it; bandwidth without power is a waste.

tormentos

Would that 7790 with faster memory include a redesign of the non-CU internal bus to support ~1000 GB/s of bandwidth?

Guess the non-CU internal ring bus width needed to support 1000 GB/s of raw bandwidth.

L1 cache at 64 bytes per cycle x 14 CUs x 1000MHz = ~896 GB/s of raw compute bandwidth.

The existing 7790 can compute at this peak level within 16 KB L1 + wavefront buffer size x 14 CUs.

The existing 7970 can compute at this peak level within 16 KB L1 + wavefront buffer size x 32 CUs.

Any shader program that exceeds this L1 cache + wavefront buffer size is asking for a world of hurt.
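As a rough check of that aggregate figure, assuming GCN's 64 bytes/clock L1 read path per CU:

# Aggregate L1 bandwidth = per-CU L1 bytes/clock * CU count * core clock.
def l1_bandwidth(cus, clock_hz, bytes_per_clock=64):
    return bytes_per_clock * cus * clock_hz

print(l1_bandwidth(14, 1.000e9) / 1e9)   # 896 GB/s  (7790, 14 CUs @ 1 GHz)
print(l1_bandwidth(32, 0.925e9) / 1e9)   # ~1894 GB/s (7970, 32 CUs @ 925 MHz)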

AMD has already shown a way to boost the existing Cayman's 2-triangles-per-cycle performance with a larger cache.

http://www.tomshardware.com/reviews/768-shader-pitcairn-review,3196-5.html

[Chart: Crysis 2 DX11 benchmark results]

Both the 7770 and the prototype 7850 with 12 CUs have similar ALU power; the difference in frame rates is mostly contributed by non-CU hardware.

#85 delta3074
Member since 2007 • 20003 Posts
The cows on this board seem to think that the GPUs on these consoles run in total isolation from the rest of the system. GPU power alone is not the defining factor in a console's overall power; if that were the case, the 360 would be substantially more powerful than the PS3, because the Xenos kicks the crap out of the gimped 7900 in the ps3.
#86 tormentos
Member since 2003 • 33793 Posts

[QUOTE="tormentos"]

ESRAM can be 1,000GB/s it doesn't change the fact that the GPU glue to it is gimped,you can give a 7790 the bandwidth of the 7970 and that would not make the 7790 beat the 7970 or work like it,bandwidth without power is a waste.

ronvalencia

Would that 7790 with faster memory include a redesign with non-CU internal bus to support ~1000 GB/s of bandwidth?

Guess the non-CU internal ring bus width to support 1000 GB/s of raw bandwidth.

 

L1 cache 64 bytes per cycle x 14 CU x 1000Mhz = ~834 GB/s of raw compute bandwidth.

Existing 7790 can compute at this peak level at 16 bytes + wavefront buffer size x 14 CUs.

Existing 7970 can compute at this peak level at 16 bytes + wavefront buffer size x 32 CUs.

 

Any shader programs that exceeds this L1 cache + wave front buffer size is asking for a world of hurt.

 

 

AMD already shown a way to boost the existing Cayman's 2 triangles per cycle performance with larger cache.

 

 

 

http://www.tomshardware.com/reviews/768-shader-pitcairn-review,3196-5.html

0702%20Crysis2%20DX11.png

 

Both 7770 and prototype 7850 with 12 CUs has similar ALU power with the difference in frame rates are mostly contributed by non-CU hardware.

 

 

You did no address  my point,if you give a 7790 a 7970 bandwidth will it perform like the 7970.? Yes - No and why.?

 

The 7770 is capeverde that prototype is Pitcairn which is stronger even with lower clock while having just 2 CU more,the speed of the 7770 should make up for the CU difference but it doesn't because they are not the same.

 

#87 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="Krelian-co"]

i also have a 7970, why does it matter again when you repeat it in every post you make?

now the question is does that makes the xbox better? no, is still weaker than the ps4, #dealwithit #secretsauceisalive

tormentos

Is the believe that 32ROP are over kill for the PS4,because is 1080p is already known that the xbox one at 1080p can run into problems in graphics intensive games,the reason Forza doesn't is because it is a racing games with last gen effects.

In fact Forza doesn't have day night or dynamic weather because it will be a hit to performance,and the game would not run at 1080p and 60 FPS.

32 ROP can be well use in 1080p games like Crysis 3 do push them,the xbox one 32 MB of ESRAM is not enough,they would need like 48MB hell Killzone SF already has 48MB frame buffer if you tryed to fit Killzone on the xbox one the resolution would not hit 1080p.

Crysis 3 is not bound by 32 ROPS since older DX11 cards has 32 ROPS.

#88 tormentos
Member since 2003 • 33793 Posts

The cows on this board seem to think that the GPUs on these consoles run in total isolation from the rest of the system. GPU power alone is not the defining factor in a console's overall power; if that were the case, the 360 would be substantially more powerful than the PS3, because the Xenos kicks the crap out of the gimped 7900 in the ps3. delta3074

The PS3 has a gimped 7800 GTX, not a 7900.

If the PS3 had a Xenon for a CPU, just like the xbox 360, there would not be a single scenario where the PS3 beat the 360 or was even close to it. The only reason the PS3 was there, and in some cases surpassed it, is because Cell actually helped the GPU. While the Xenos had to do everything for itself, the RSX didn't have to; hell, Cell and the RSX shared a direct connection between the two.

#89 tormentos
Member since 2003 • 33793 Posts

Crysis 3 is not bound by 32 ROPS, since older DX11 cards have 32 ROPS.

ronvalencia

I didn't say it was bound, dude...

Re-read, please..

 

#90 Krelian-co
Member since 2006 • 13274 Posts

[QUOTE="tormentos"]

ESRAM can be 1,000GB/s it doesn't change the fact that the GPU glue to it is gimped,you can give a 7790 the bandwidth of the 7970 and that would not make the 7790 beat the 7970 or work like it,bandwidth without power is a waste.

ronvalencia

Would that 7790 with faster memory include a redesign with non-CU internal bus to support ~1000 GB/s of bandwidth?

Guess the non-CU internal ring bus width to support 1000 GB/s of raw bandwidth.

 

L1 cache 64 bytes per cycle x 14 CU x 1000Mhz = ~834 GB/s of raw compute bandwidth.

Existing 7790 can compute at this peak level at 16 bytes + wavefront buffer size x 14 CUs.

Existing 7970 can compute at this peak level at 16 bytes + wavefront buffer size x 32 CUs.

 

Any shader programs that exceeds this L1 cache + wave front buffer size is asking for a world of hurt.

 

 

AMD already shown a way to boost the existing Cayman's 2 triangles per cycle performance with larger cache.

 

 

 

http://www.tomshardware.com/reviews/768-shader-pitcairn-review,3196-5.html

0702%20Crysis2%20DX11.png

 

Both 7770 and prototype 7850 with 12 CUs has similar ALU power with the difference in frame rates are mostly contributed by non-CU hardware.

 

 

 

enough ron, xbox is weaker than ps4, stop posting crap

57360-the-cake-is-a-lie-p9AZ.jpeg

#91 delta3074
Member since 2007 • 20003 Posts
and while the Xenos had to do everything for itself tormentos

Incorrect; the 360 has plenty of games that utilise CPU anti-aliasing like FXAA and MLAA. Metro 2033 uses CPU-based AA. "The PS3 may lose one of its graphical advantages over the Xbox 360, with a team working on implementing a version of Sony's MLAA anti-aliasing technique, seen in such games as God of War 3 and Killzone 3, on Microsoft's console, and they claim that they've raised the quality bar considerably over other versions." http://beefjack.com/news/xbox-360-to-get-ps3-style-mlaa-anti-aliasing-quality-bar-raised/
#92 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"]

[QUOTE="tormentos"]

ESRAM can be 1,000GB/s it doesn't change the fact that the GPU glue to it is gimped,you can give a 7790 the bandwidth of the 7970 and that would not make the 7790 beat the 7970 or work like it,bandwidth without power is a waste.

tormentos

Would that 7790 with faster memory include a redesign with non-CU internal bus to support ~1000 GB/s of bandwidth?

Guess the non-CU internal ring bus width to support 1000 GB/s of raw bandwidth.

L1 cache 64 bytes per cycle x 14 CU x 1000Mhz = ~834 GB/s of raw compute bandwidth.

Existing 7790 can compute at this peak level at 16 bytes + wavefront buffer size x 14 CUs.

Existing 7970 can compute at this peak level at 16 bytes + wavefront buffer size x 32 CUs.

Any shader programs that exceeds this L1 cache + wave front buffer size is asking for a world of hurt.

AMD already shown a way to boost the existing Cayman's 2 triangles per cycle performance with larger cache.

http://www.tomshardware.com/reviews/768-shader-pitcairn-review,3196-5.html

0702%20Crysis2%20DX11.png

Both 7770 and prototype 7850 with 12 CUs has similar ALU power with the difference in frame rates are mostly contributed by non-CU hardware.

You did no address my point,if you give a 7790 a 7970 bandwidth will it perform like the 7970.? Yes - No and why.?

The 7770 is capeverde that prototype is Pitcairn which is stronger even with lower clock while having just 2 CU more,the speed of the 7770 should make up for the CU difference but it doesn't because they are not the same.

There's a bandwidth gap between Wavefront buffer/L1 cache and rest of the chip.

If CU has 32 KB L1 cache, it can support shader programs that is twice the size or double the shader programs for execution. The improvments with L1 cache wouldn't be 2X when compared to adding ALU + L1 cache combo.

7970 has 32 CUs which is double the L1 cache storage when compared to 7850's 16 CUs.

-----------

On pure CU bounded workloads, CU ~= CU e.g. Bitcoin. With Bitcoin, GCN roughly scales with CU count. AMD basically copy and paste it's CU design to scale with SKUs.

Only 7870 XT/79x0 has sightly different CU design which is due to it's practical 64bit DP FP support.

7770 has 72 GB/s memory bandwidth and single link to L2 cache. 7770 has a gimped 1 triangle per cycle rate.

Prototype 7850 has 153.6/s memory bandwidth and four links to L2 cache. Both X1 and Prototype 7850 has 2 triangles per cycle rate.

Both prototype-7850 with 12 CUs and X1's 12 CU GCN has the same L1 cache count and wave front buffer size.

On AMD GCN, L2 cache is use for texture cache.
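Those memory bandwidth figures are just bus width times effective data rate; a quick check, assuming 256-bit buses and per-pin rates of 4.8 Gbps (7850 GDDR5), 5.5 Gbps (PS4 GDDR5) and 2.133 Gbps (X1 DDR3):

# Memory bandwidth = (bus width in bits / 8) * effective data rate per pin.
def mem_bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(mem_bw_gbs(256, 4.8))     # 153.6 GB/s - 7850 GDDR5
print(mem_bw_gbs(256, 5.5))     # 176.0 GB/s - PS4 GDDR5
print(mem_bw_gbs(256, 2.133))   # ~68.3 GB/s - X1 DDR3-2133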

#93 tormentos
Member since 2003 • 33793 Posts

Incorrect; the 360 has plenty of games that utilise CPU anti-aliasing like FXAA and MLAA. Metro 2033 uses CPU-based AA. "The PS3 may lose one of its graphical advantages over the Xbox 360, with a team working on implementing a version of Sony's MLAA anti-aliasing technique, seen in such games as God of War 3 and Killzone 3, on Microsoft's console, and they claim that they've raised the quality bar considerably over other versions." http://beefjack.com/news/xbox-360-to-get-ps3-style-mlaa-anti-aliasing-quality-bar-raised/ delta3074

Dude, FXAA has a very small performance hit, so yeah, outside of crappy AA (which should have been free thanks to EDRAM, but wasn't) the Xenon did basically nothing. Compare that to the post-processing Cell did, or physics: Cell was a monster at physics, which is why it was compared to the Ageia card back in the day.

Those things were done on the GPU on the xbox 360, and every single thing the Xenos had to do for itself was one more thing that cost it performance-wise.

#94 tormentos
Member since 2003 • 33793 Posts

 

On purely CU-bound workloads, CU ~= CU, e.g. Bitcoin. With Bitcoin, GCN performance roughly scales with CU count. AMD basically copies and pastes its CU design to scale across SKUs.

Only the 7870 XT/79x0 has a slightly different CU design, due to its practical 64-bit DP FP support.

The 7770 has 72 GB/s of memory bandwidth and a single link to L2 cache. The 7770 has a gimped 1-triangle-per-cycle rate.

The prototype 7850 has 153.6 GB/s of memory bandwidth and four links to L2 cache. Both the X1 and the prototype 7850 have a 2-triangles-per-cycle rate.

Both the prototype 7850 with 12 CUs and X1's 12-CU GCN have the same L1 cache count and wavefront buffer size.

On AMD GCN, the L2 cache is used as texture cache.

ronvalencia

You are evading my question..

Will the 7790 perform like a 7970 just because you give it the 7970's bandwidth, yes or no? And why?

The 7870 also does 2 triangles per cycle like the 7850; that doesn't stop the 7870 from beating it, because it has 4 more CUs and a higher clock speed. In this case the PS4 has 6 more CUs and a slightly lower clock speed, while the xbox one officially has a 10% GPU reservation.

 

 

#95 superclocked
Member since 2009 • 5864 Posts
[QUOTE="Kurt-Biz"]X1 GPU: 1.18 TF GPU (12 CUs) for games 768 Shaders 48 Texture units 16 ROPS 2 ACE/ 16 queues PS4 GPU: 1.84TF GPU ( 18 CUs) for games + 56% 1152 Shaders +50% 72 Texture units +50% 32 ROPS + 100% 8 ACE/64 queues +400%

768sp * 2ops * 852MHz = 1.31TFLOPS for the XB1 1.84 TFLOPS - 29% = 1.31 TFLOPS You can try and say that it's 1.18 TFLOPS because part of the XB1 GPU is reserved, but the same is true for the PS4, Sony just won't admit how much yet.. Also, the XB1 can push more polygon's per second, the move engines make cloud computing a reality on current broadband connections, the move engines and eSRAM were also made for tiled texture straming, the XB1 has a superior audio chip.. The PS4 does have it's advantages, but the XB1 has advantages, too...
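That FLOPS arithmetic is easy to reproduce; a quick check using the shader counts and clocks quoted in the post (852MHz XB1, 800MHz PS4):

# Single-precision FLOPS = shader ALUs * 2 ops/clock (multiply-add) * clock.
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

xb1 = tflops(768, 852)     # ~1.31 TFLOPS
ps4 = tflops(1152, 800)    # ~1.84 TFLOPS
print(xb1, ps4, ps4 / xb1) # PS4 ends up ~1.41x, i.e. ~41% ahead on paper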
#96 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"]

On pure CU bounded workloads, CU ~= CU e.g. Bitcoin. With Bitcoin, GCN roughly scales with CU count. AMD basically copy and paste it's CU design to scale with SKUs.

Only 7870 XT/79x0 has sightly different CU design which is due to practical 64bit DP FP support.

7770 has 72 GB/s memory bandwidth and single link to L2 cache. 7770 has a gimped 1 triangle per cycle rate.

Prototype 7850 has 153.6/s memory bandwidth and four links to L2 cache. Both X1 and Prototype 7850 has 2 triangles per cycle rate.

Both prototype-7850 with 12 CUs and X1's 12 CU GCN has the same L1 cache count and wave front buffer size.

On AMD GCN, L2 cache is use for texture cache.

tormentos

You are evading my question..

Will the 7790 perform like a 7970 just because you give it 7970 bandwidth yes or no.? and why..?



The 7870 also has 2 triangles like the 7850 doesn't stop it from beating it because it has 4 more CU and higher clock speed,in this case the PS4 has 6 more CU and a little less clock speed,while the xbox one officially has 10% GPU reservation.

I haven't finished my post.

There's a bandwidth gap between Wavefront buffer/L1 cache and rest of the chip.

If CU has 32 KB L1 cache, it can support shader programs that is twice the size or double the shader programs for execution.

The improvements with L1 cache wouldn't be 2X when compared to adding ALU + L1 cache combo.

7970 has 32 CUs which is double the L1 cache storage when compared to 7850's 16 CUs.

------------

Each rendering pass contributes to the total rendering time and different hardware setups changes these factors.

#98 deactivated-5ba16896d1cc2
Member since 2013 • 2504 Posts

The cows on this board seem to think that the GPUs on these consoles run in total isolation from the rest of the system. GPU power alone is not the defining factor in a console's overall power; if that were the case, the 360 would be substantially more powerful than the PS3, because the Xenos kicks the crap out of the gimped 7900 in the ps3. delta3074

Too bad both consoles use the same exact 8-core Jaguar CPU, so we can ACTUALLY look at the GPUs on paper, since they are also both GCN-architecture GPUs.

[QUOTE="Kurt-Biz"]X1 GPU: 1.18 TF GPU (12 CUs) for games, 768 Shaders, 48 Texture units, 16 ROPS, 2 ACE / 16 queues. PS4 GPU: 1.84 TF GPU (18 CUs) for games (+56%), 1152 Shaders (+50%), 72 Texture units (+50%), 32 ROPS (+100%), 8 ACE / 64 queues (+400%) superclocked

768 SP * 2 ops * 852 MHz = 1.31 TFLOPS for the XB1; 1.84 TFLOPS - 29% = 1.31 TFLOPS. You can try to say that it's 1.18 TFLOPS because part of the XB1 GPU is reserved, but the same is true for the PS4; Sony just won't admit how much yet.. Also, the XB1 can push more polygons per second, the move engines make cloud computing a reality on current broadband connections, the move engines and ESRAM were also made for tiled texture streaming, and the XB1 has a superior audio chip.. The PS4 does have its advantages, but the XB1 has advantages, too...

At the end of the day, even with all this stuff that the XB1 may have and the ps4 may not (and the PS4 apu already does have special-purpose processors, just like the Xbox One), the ps4 IS still stronger and performs better than the Xbox One.

The PS4 is the more capable machine, without a doubt.

#99 GravityX
Member since 2013 • 865 Posts

[QUOTE="delta3074"] too bad both consoles use the same exact 8 Core Jaguar CPU, so we can ACTUALLY look at the GPU on paper since they are also both GCN architecture GPU's [QUOTE="superclocked"][QUOTE="Kurt-Biz"]X1 GPU: 1.18 TF GPU (12 CUs) for games 768 Shaders 48 Texture units 16 ROPS 2 ACE/ 16 queues PS4 GPU: 1.84TF GPU ( 18 CUs) for games + 56% 1152 Shaders +50% 72 Texture units +50% 32 ROPS + 100% 8 ACE/64 queues +400%xboxiphoneps3

768sp * 2ops * 852MHz = 1.31TFLOPS for the XB1 1.84 TFLOPS - 29% = 1.31 TFLOPS You can try and say that it's 1.18 TFLOPS because part of the XB1 GPU is reserved, but the same is true for the PS4, Sony just won't admit how much yet.. Also, the XB1 can push more polygon's per second, the move engines make cloud computing a reality on current broadband connections, the move engines and eSRAM were also made for tiled texture straming, the XB1 has a superior audio chip.. The PS4 does have it's advantages, but the XB1 has advantages, too...

at the end of the day, even with all this stuff that the XB1 may have and that the ps4 may not have (which it already does have special purpose processors in the PS4 apu just like the Xbox One,

the ps4 IS still stronger and performs better then the Xbox One

 

the PS4 is the more capable machine, without a doubt 

But what about the games? Let's look at the performance of the games.

Knack doesn't seem to be pushing the visuals too much and yet it is having frame rate issues.

KZSF looks great for a genre that already looked great in previous generations. It doesn't take a great leap to make it look super good. Plus there appears to be frame rate issue when you a multiple players in a multiplayer skirmish with an explosion going off.

So it appears that the PS4 will have frame rate issues.

Now lets look at XB1 games.

Ryse looks to be the most detail and best looking game on a technical level. However the animations are awful, not the hardwares fault.

Dead Rising 3 looks very good and with a ton of stuff going on with no more frame rate drops, after the updated drivers and cpu clock increase. Imagine what a linear game like Uncharted or Last of Us would like if on the X1. It's incredible to think off.

So in closing the difference in overall visual fidelity will be non existent.

However the choice to go with GDDR5 and a weak CPU may present frame issues for the PS4.