8GB DDR3 + 32MB ESRAM vs 8GB GDDR5



#51 Tighaman
Member since 2006 • 1038 Posts

OMG, do we have to do this again and again? 32 ROPs don't mean shit if you don't have the bandwidth to use all of them efficiently. And the X1 has 8GB of DDR3 + 8GB of NAND flash + 47MB of ESRAM, and yes, they all go together, each helping the other. Please read up on the advantages of combining them; you would be surprised.


#52  Edited By jhcho2
Member since 2004 • 5103 Posts

Obviously there are a lot of misconceptions about the ESRAM.

While the 8GB of DDR3 and 8GB of GDDR5 are meant to act as a cache, storing game assets to be called upon at a later time, the ESRAM doesn't quite function that way. I look at the ESRAM as an intercessor between the GPU and the DDR3 RAM. GPUs normally operate on GDDR5; the ESRAM is there to make up for its absence. That also means the ESRAM introduces another variable between the GPU and RAM which isn't normally there. It's true that the ESRAM has very high bandwidth, but that bandwidth is likely to be squandered by poor utilization and the extra programming the memory requires. Having more things to program for can't be a good thing.

Imagine if your CPU had to go through another set of processors before accessing the RAM, and that separate set of processors required its own code. The best thing would be to not have it in the first place.
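To make that "extra variable" concrete, here is a toy model of managing a small fast pool next to a big slow one. This is purely illustrative Python with made-up buffer names and sizes, not any real console SDK API:

```python
# Toy model of a split memory pool: a tiny fast scratchpad (ESRAM-like)
# next to a large slower pool (DDR3-like). Illustrative only; real console
# allocation goes through a proprietary SDK, not anything like this.

ESRAM_BYTES = 32 * 1024 * 1024  # 32 MB fast on-chip pool
free_esram = ESRAM_BYTES

def place_render_target(name, width, height, bytes_per_pixel):
    """Place a render target in the fast pool if it fits, else spill to DDR3."""
    global free_esram
    size = width * height * bytes_per_pixel
    if size <= free_esram:
        free_esram -= size
        return f"{name}: {size / 2**20:.1f} MB -> ESRAM"
    return f"{name}: {size / 2**20:.1f} MB -> DDR3 (doesn't fit)"

# A 1080p G-buffer exhausts 32 MB quickly; the developer has to decide
# what spills, which is exactly the extra work described above.
for rt in [("color", 1920, 1080, 4), ("normals", 1920, 1080, 4),
           ("depth", 1920, 1080, 4), ("HDR light buffer", 1920, 1080, 8)]:
    print(place_render_target(*rt))
```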


#53 deactivated-57d8401f17c55
Member since 2012 • 7221 Posts

with regards to the esram, put it this way: wii u is essentially in the ballpark of the 360 raw-power-wise, but far more modern and efficient, with twice the memory.

it's designed for 720p output, and it also has 32MB of eDRAM. xbone has only as much embedded memory as a console whose raw power is close to last generation's, which is a big problem.

not only that, even if there were more of it, it still wouldn't match GDDR5's bandwidth.


#54  Edited By navyguy21
Member since 2003 • 17952 Posts

@Chozofication said:

with regards to the esram, put it this way: wii u is essentially in the ballpark of the 360 raw-power-wise, but far more modern and efficient, with twice the memory.

it's designed for 720p output, and it also has 32MB of eDRAM. xbone has only as much embedded memory as a console whose raw power is close to last generation's, which is a big problem.

not only that, even if there were more of it, it still wouldn't match GDDR5's bandwidth.

eDRAM and eSRAM are very different...

WiiU has eDRAM, which is what 360 used.

XB1 uses eSRAM, which is a huge difference.


#55  Edited By deactivated-57d8401f17c55
Member since 2012 • 7221 Posts

@navyguy21 said:

@Chozofication said:

with regards to the esram, put it this way: wii u is essentially in the ballpark of the 360 raw-power-wise, but far more modern and efficient, with twice the memory.

it's designed for 720p output, and it also has 32MB of eDRAM. xbone has only as much embedded memory as a console whose raw power is close to last generation's, which is a big problem.

not only that, even if there were more of it, it still wouldn't match GDDR5's bandwidth.

eDRAM and eSRAM are very different...

WiiU has eDRAM, which is what 360 used.

XB1 uses eSRAM, which is a huge difference.

not a huge difference, actually. it would be, if not for the fact that, due to some problem the xbone has, the esram has to be manually flushed, unlike how static memory is supposed to work.

the xbone is a bit of a train wreck architecturally.

also, the wii u's edram is pretty damn fast; i don't think the bone's esram is even 50% faster. it's in a different league than the 360's edram, which is 8 years old.


#56  Edited By 04dcarraher
Member since 2004 • 23858 Posts

@Chozofication said:

@04dcarraher said:

@ShepardCommandr said:

RAM doesn't do much when the GPU is roughly 50% weaker.

it's not 50%, it's around 30% slower.

how could you come to this conclusion? in fact it's more than 50% stronger, since kinect saps 10% of the gpu. although the xbone's upclock somewhat mitigates that, it's still a bit more than 50%. and that's without factoring in the rops and compute advantages.

also, xbone doesn't even have a cpu advantage either; it seems like the os and kinect really eat into that as well. either one more core is taken up by the os, or the ps4 is higher clocked. it's probably the former, though, considering ps4's size.

combining all the ps4's advantages, bandwidth being the biggest, it isn't far-fetched to say that ps4 is close to double the xbone in a lot of situations. the xbone is just too bottlenecked, too focused on kinect and a bloated os, and just flat-out weaker.

not to mention the nasty sharpening upscale and crushed blacks; the 360 had the latter problem as well.

You don't think the PS4's GPU has an allocated reserve itself? Also, even with a 10% allocation, the X1 still has about 1.2 TFLOPs, which is 66% of 1.8 TFLOPs, meaning a 34% gap in total GPU processing power. And it shows, with games like BF4 only seeing roughly a 36% difference in resolution (720p vs 900p), or AC4 (900p vs 1080p). Also, the difference in texture fillrate between the two GPUs is only 29%.

Where are you getting the X1 having the OS or Kinect eating into it? Both the X1 and the PS4 OS allocate two CPU cores, and the PS4's OS uses more memory than MS's.

The memory bandwidth on the PS4 offers no real gains over DDR3 when the memory is just acting as a cache; the PS4's advantage lies with its GPU, which is less than 40% stronger overall.
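A side note on those resolution percentages: the number you get depends entirely on which pixel count you use as the baseline. A quick sketch:

```python
def pixels(w, h):
    return w * h

res = {"720p": pixels(1280, 720), "900p": pixels(1600, 900),
       "1080p": pixels(1920, 1080)}

# The same 720p-vs-900p gap reads as "36% fewer" or "56% more" pixels.
print(f"720p / 900p:  {res['720p'] / res['900p']:.0%}")   # 64%, i.e. 36% fewer
print(f"900p / 720p:  {res['900p'] / res['720p']:.0%}")   # 156%, i.e. 56% more
print(f"900p / 1080p: {res['900p'] / res['1080p']:.0%}")  # ~69%
```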


#57  Edited By navyguy21
Member since 2003 • 17952 Posts

@Chozofication said:

@navyguy21 said:

@Chozofication said:

with regards to the esram, put it this way: wii u is essentially in the ballpark of the 360 raw-power-wise, but far more modern and efficient, with twice the memory.

it's designed for 720p output, and it also has 32MB of eDRAM. xbone has only as much embedded memory as a console whose raw power is close to last generation's, which is a big problem.

not only that, even if there were more of it, it still wouldn't match GDDR5's bandwidth.

eDRAM and eSRAM are very different...

WiiU has eDRAM, which is what 360 used.

XB1 uses eSRAM, which is a huge difference.

not a huge difference, actually. it would be, if not for the fact that, due to some problem the xbone has, the esram has to be manually flushed, unlike how static memory is supposed to work.

the xbone is a bit of a train wreck architecturally.

also, the wii u's edram is pretty damn fast; i don't think the bone's esram is even 50% faster. it's in a different league than the 360's edram, which is 8 years old.

Well there are a few differences.

1. The ESRAM in XB1 is integrated into the main SoC, and can read and write simultaneously

2. ESRAM can overflow to DDR3; not so with eDRAM

3. Within 32MB of ESRAM, you can store up to 6GB of tiled resources (rough tile math in the sketch below)

I'm merely pointing out how it is superior to eDRAM. ESRAM will never make up for the entire difference with GDDR5, since devs will have to account for it, tile resources, shift assets, etc., rather than having a single pool of directly accessible RAM.
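On point 3: the "6GB in 32MB" figure is about sparse tiling, where only the tiles a frame actually touches need physical backing. A back-of-envelope sketch (64KB is the D3D tiled-resources tile size; the texture dimensions and the 5% residency figure are made-up assumptions):

```python
TILE_BYTES = 64 * 1024           # D3D tiled-resources tile size
ESRAM_BYTES = 32 * 1024 * 1024   # physical pool

print(f"tiles resident at once: {ESRAM_BYTES // TILE_BYTES}")  # 512

# A 16K x 16K texture at 1 byte/texel (e.g. BC7) is 256 MB of virtual data:
virtual_bytes = 16384 * 16384 * 1
print(f"virtual tiles in one 16K texture: {virtual_bytes // TILE_BYTES}")  # 4096

# If a frame samples only ~5% of it, that's ~205 resident tiles (~12.8 MB),
# which fits. The multi-GB figure is virtual address space, not storage.
```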


#58  Edited By Pray_to_me
Member since 2011 • 4041 Posts

@Thunder7151 said:

8GB DDR3 + 32MB ESRAM vs 8GB GDDR5

will one of these RAM solutions perform noticeably better 2 years from now onwards?

Two years? Games are already double the resolution on PS4 vs XB1. It didn't even take 2 days.


#59 navyguy21
Member since 2003 • 17952 Posts

@Pray_to_me said:

@Thunder7151 said:

8GB DDR3 + 32MB ESRAM vs 8GB GDDR5

will one of these RAM solutions perform noticeably better 2 years from now onwards?

Two years? Games are already double the resolution on PS4 vs XB1. It didn't even take 2 days.

Is that a serious response?


#60 deactivated-57d8401f17c55
Member since 2012 • 7221 Posts

@04dcarraher said:

@Chozofication said:

@04dcarraher said:

@ShepardCommandr said:

RAM doesn't do much when the GPU is roughly 50% weaker.

it's not 50%, it's around 30% slower.

how could you come to this conclusion? in fact it's more than 50% stronger, since kinect saps 10% of the gpu. although the xbone's upclock somewhat mitigates that, it's still a bit more than 50%. and that's without factoring in the rops and compute advantages.

also, xbone doesn't even have a cpu advantage either; it seems like the os and kinect really eat into that as well. either one more core is taken up by the os, or the ps4 is higher clocked. it's probably the former, though, considering ps4's size.

combining all the ps4's advantages, bandwidth being the biggest, it isn't far-fetched to say that ps4 is close to double the xbone in a lot of situations. the xbone is just too bottlenecked, too focused on kinect and a bloated os, and just flat-out weaker.

not to mention the nasty sharpening upscale and crushed blacks; the 360 had the latter problem as well.

You don't think the PS4's GPU has an allocated reserve itself? Also, even with a 10% allocation, the X1 still has about 1.2 TFLOPs, which is 66% of 1.8 TFLOPs, meaning a 34% gap in total GPU processing power. And it shows, with games like BF4 only seeing roughly a 36% difference in resolution (720p vs 900p), or AC4 (900p vs 1080p). Also, the difference in texture fillrate between the two GPUs is only 29%.

Where are you getting the X1 having the OS or Kinect eating into it? Both the X1 and the PS4 OS allocate two CPU cores, and the PS4's OS uses more memory than MS's.

The memory bandwidth on the PS4 offers no real gains over DDR3 when the memory is just acting as a cache; the PS4's advantage lies with its GPU, which is less than 40% stronger overall.

first of all, your math and wording are messed up.

ps4's gpu is 50% or more stronger; measured the other way, xbone's gpu is around 30% weaker. still, ps4 has more than a 50% advantage, and again, that's only factoring in the CUs.

anyway, yes, ps4 did have 2 cores allocated, but developers and neogaf insiders claim to get more out of ps4's cpu than xbone's, and a lot could have changed since feb., after all. these console OSs are always being made more efficient. i admit it's a bit murky, and even the clockspeed hasn't been disclosed, so i'll call that one a probability.

there's no reason to suggest there's gpu allocation on ps4, no. kinect is the culprit of that on xbone, just as games on 360 with kinect had access to less gpu power.

that last bit is beyond asinine; ps4's bandwidth is its biggest advantage, without a doubt.


#61  Edited By deactivated-57d8401f17c55
Member since 2012 • 7221 Posts

@navyguy21:

never said there were no differences, just that the esram's would-be biggest advantage isn't present, and it also isn't all that much faster than wii u's.

also, wii u's edram can do some tricks too; it can be used as cpu cache, for one, and not all the info is out there, since nintendo doesn't disclose anything about their hardware anymore.

but yes, the esram in xbone is still quite a bit better.


#62 Pray_to_me
Member since 2011 • 4041 Posts

also, it's a ridiculous excuse to blame a multibillion-dollar console's performance on "teh drivers"... then again... this is Microsoft LOL!


#63 Tighaman
Member since 2006 • 1038 Posts

X1 doesn't have a 7770 or a 7790 or a 7850 prototype; it's a downclocked R7 260X. Look at the specs. I said from the get-go you don't partner for old tech, and yes, it's more powerful than the PS4, but old drivers on old engines are slowing advancement. It's not straightforward like the PS4; that's why you are seeing the advantage.


#64  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Tighaman said:

X1 doesn't have a 7770 or a 7790 or a 7850 prototype; it's a downclocked R7 260X. Look at the specs. I said from the get-go you don't partner for old tech, and yes, it's more powerful than the PS4, but old drivers on old engines are slowing advancement. It's not straightforward like the PS4; that's why you are seeing the advantage.

The R7 260X is just a renamed/tweaked 7790.

The 7790 already has AMD's TrueAudio hardware, LOL... and AMD still hasn't enabled the 7790's TrueAudio via drivers.

The main reason for my use of the prototype-7850 is that its memory bandwidth, active CU count, tessellation and clock speed match the X1's. Disabled CUs have no impact on performance.


#65  Edited By 04dcarraher
Member since 2004 • 23858 Posts

@Chozofication said:

@04dcarraher said:

@Chozofication said:

@04dcarraher said:

@ShepardCommandr said:

RAM doesn't do much when the GPU is roughly 50% weaker.

it's not 50%, it's around 30% slower.

how could you come to this conclusion? in fact it's more than 50% stronger, since kinect saps 10% of the gpu. although the xbone's upclock somewhat mitigates that, it's still a bit more than 50%. and that's without factoring in the rops and compute advantages.

also, xbone doesn't even have a cpu advantage either; it seems like the os and kinect really eat into that as well. either one more core is taken up by the os, or the ps4 is higher clocked. it's probably the former, though, considering ps4's size.

combining all the ps4's advantages, bandwidth being the biggest, it isn't far-fetched to say that ps4 is close to double the xbone in a lot of situations. the xbone is just too bottlenecked, too focused on kinect and a bloated os, and just flat-out weaker.

not to mention the nasty sharpening upscale and crushed blacks; the 360 had the latter problem as well.

You don't think the PS4's GPU has an allocated reserve itself? Also, even with a 10% allocation, the X1 still has about 1.2 TFLOPs, which is 66% of 1.8 TFLOPs, meaning a 34% gap in total GPU processing power. And it shows, with games like BF4 only seeing roughly a 36% difference in resolution (720p vs 900p), or AC4 (900p vs 1080p). Also, the difference in texture fillrate between the two GPUs is only 29%.

Where are you getting the X1 having the OS or Kinect eating into it? Both the X1 and the PS4 OS allocate two CPU cores, and the PS4's OS uses more memory than MS's.

The memory bandwidth on the PS4 offers no real gains over DDR3 when the memory is just acting as a cache; the PS4's advantage lies with its GPU, which is less than 40% stronger overall.

first of all, your math and wording are messed up.

ps4's gpu is 50% or more stronger; measured the other way, xbone's gpu is around 30% weaker. still, ps4 has more than a 50% advantage, and again, that's only factoring in the CUs.

anyway, yes, ps4 did have 2 cores allocated, but developers and neogaf insiders claim to get more out of ps4's cpu than xbone's, and a lot could have changed since feb., after all. these console OSs are always being made more efficient. i admit it's a bit murky, and even the clockspeed hasn't been disclosed, so i'll call that one a probability.

there's no reason to suggest there's gpu allocation on ps4, no. kinect is the culprit of that on xbone, just as games on 360 with kinect had access to less gpu power.

that last bit is beyond asinine; ps4's bandwidth is its biggest advantage, without a doubt.

The PS4 GPU is not flat-out 50% stronger; it's anywhere from 33-40% stronger overall. There is only one area where the PS4 GPU has a truly major advantage over the X1, and that is gigapixel fillrate. Everything else is only in the 30% range.

Sony already stated that the PS4 OS and features have 3.5GB and two cores allocated for those tasks. The X1 also has two cores and 3GB allocated for OS and features. Both consoles are in the same boat when it comes to resources being allocated for those jobs.

Also, there is GPU allocation on the PS4; heck, even when the PS4 was being tested it had a 14+4 CU reserve ratio for compute tasks, and Sony removed the 4 CU requirement to allow developers to use any ratio they wanted. Also, with the PS4 camera going to have facial and voice recognition, some of that processing will need to be allocated to the GPU, because they only have two cores. The PS4 has a bit more of a reserve pool to tap without being affected the way the X1 is. So people shouldn't just assume the PS4 GPU is 100% allocated for games and automatically use a 100%-vs-90% ratio when comparing.

The PS4's bandwidth does not mean much when your target is 1080p or lower and the CPU only has a 20GB/s lane. GDDR5 vs DDR3+ESRAM will not yield any real difference in performance nor any graphical improvement.
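Since CU counts and clocks keep coming up, note that the headline TFLOP figures follow mechanically from them. A small sketch (64 lanes per CU and 2 FLOPs per FMA are GCN constants; the clocks are the published ones):

```python
def gcn_gflops(cus, clock_mhz):
    # Each GCN CU has 64 shader lanes; an FMA counts as 2 FLOPs per lane per cycle.
    return cus * 64 * 2 * clock_mhz / 1000.0

print(f"Xbox One (12 CU @ 853 MHz): {gcn_gflops(12, 853):.0f} GFLOPs")  # ~1310
print(f"PS4      (18 CU @ 800 MHz): {gcn_gflops(18, 800):.0f} GFLOPs")  # ~1843
print(f"14+4 split @ 800 MHz: {gcn_gflops(14, 800):.0f} + {gcn_gflops(4, 800):.0f} GFLOPs")
```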


#66 Tighaman
Member since 2006 • 1038 Posts

@ronvalencia: you say tweaked or rebranded, but its benchmarks are better than both of the GPUs y'all talked about. SHAPE was in-house; AMD had nothing to do with that. SHAPE WAS ALL MICROSOFT, so don't LOL me. Like I said, don't pay for old tech. Even then, the TrueAudio that's in the R9 290 is not better than SHAPE.


#67 curiousarman
Member since 2011 • 48 Posts

So, the RAM doesn't make much of a difference, but the processor does?


#68 indigenous_euphoria
Member since 2013 • 255 Posts

I think in the long run the X1's memory setup will prove to be better. 176GB/s is the on-paper peak for GDDR5. The X1's ESRAM can do read and write cycles simultaneously, so its on-paper peak is 272GB/s. Plus it has 1.5x the coherent bandwidth, at 30GB/s; the PS4 is at 20GB/s. The X1 uses less memory for the OS too, I believe. And 12 CUs vs a 14+4 CU reserve ratio for compute tasks isn't going to give the PS4 much of an advantage. But we shall see.
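Those peak numbers come from simple bus math, with one caveat: the 272GB/s figure is the ESRAM read+write peak added to the DDR3 peak, a sum no single workload actually sustains. A sketch using the published figures (the ~204GB/s combined ESRAM number is Microsoft's own quote):

```python
def bus_gb_s(width_bits, gtransfers_per_s):
    # peak bandwidth = bus width in bytes x transfer rate
    return width_bits / 8 * gtransfers_per_s

ps4_gddr5 = bus_gb_s(256, 5.5)    # 176 GB/s
x1_ddr3 = bus_gb_s(256, 2.133)    # ~68 GB/s
x1_esram = 204.0                  # combined read+write peak, per Microsoft

print(f"PS4 GDDR5:     {ps4_gddr5:.0f} GB/s")
print(f"X1 DDR3:       {x1_ddr3:.0f} GB/s")
print(f"X1 DDR3+ESRAM: {x1_ddr3 + x1_esram:.0f} GB/s (sum of peaks)")
```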


#69 Mr-Kutaragi
Member since 2013 • 2466 Posts

TLHBOWNED


#70 Grey_Eyed_Elf
Member since 2011 • 7971 Posts

LOL...

I would get a 0% GAMING performance increase if I had 8GB of 1333MHz RAM and upgraded to 16GB of 2400MHz RAM.

Consolites, please, for the love of God, stop talking about the differences between the PS4/X1.

The core hardware is borderline the same, aside from the GPU in the PS4 being 20-30% better. That's really the only noteworthy difference.

/THREAD


#71  Edited By killzowned24
Member since 2007 • 7345 Posts

@indigenous_euphoria said:

I think in the long run the X1's memory setup will prove to be better. 176GB/s is the on-paper peak for GDDR5. The X1's ESRAM can do read and write cycles simultaneously, so its on-paper peak is 272GB/s. Plus it has 1.5x the coherent bandwidth, at 30GB/s; the PS4 is at 20GB/s. The X1 uses less memory for the OS too, I believe. And 12 CUs vs a 14+4 CU reserve ratio for compute tasks isn't going to give the PS4 much of an advantage. But we shall see.

xbone is doomed. It's weaker in everything: CPU, GPU, and RAM. Xbone will be left in the dust with only 12 compute units vs 18 on the PS4.


#72 MK-Professor
Member since 2009 • 4218 Posts

I believe you are all missing the point here: it doesn't matter what RAM the Xbox has.

Let's say the Xbox One had the same RAM speed as the PS4. It would still perform the same, simply because it would be held back by the weak GPU.


#73  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Tighaman said:

@ronvalencia: you say tweaked or rebranded, but its benchmarks are better than both of the GPUs y'all talked about. SHAPE was in-house; AMD had nothing to do with that. SHAPE WAS ALL MICROSOFT, so don't LOL me. Like I said, don't pay for old tech. Even then, the TrueAudio that's in the R9 290 is not better than SHAPE.

LOL. On the R7 260X vs the 7790, the improvements are minor or non-existent.

http://techreport.com/review/25473/amd-radeon-r7-260x-graphics-card-reviewed/7

The point with AMD TrueAudio is that it shows the renamed nature of the 7790 and the R7 260X.

The only new Radeon HD chip design is the R9-290/R9-290X; everything else is just tweaks and renames.

http://www.techspot.com/review/722-radeon-r9-270x-r7-260x/page3.html


#74  Edited By ronvalencia
Member since 2008 • 29612 Posts
@killzowned24 said:

@indigenous_euphoria said:

I think in the long run the X1's memory setup will prove to be better. 176GB/s is the on-paper peak for GDDR5. The X1's ESRAM can do read and write cycles simultaneously, so its on-paper peak is 272GB/s. Plus it has 1.5x the coherent bandwidth, at 30GB/s; the PS4 is at 20GB/s. The X1 uses less memory for the OS too, I believe. And 12 CUs vs a 14+4 CU reserve ratio for compute tasks isn't going to give the PS4 much of an advantage. But we shall see.

xbone is doomed. It's weaker in everything: CPU, GPU, and RAM. Xbone will be left in the dust with only 12 compute units vs 18 on the PS4.

Xbone's GCN has 2 ACE units, i.e. it's neither NVIDIA Fermi nor AMD Northern Islands.

On multiple-compute-queue technology, NVIDIA is in sync with AMD in this case.

http://www.anandtech.com/show/5840/gtc-2012-part-1-nvidia-announces-gk104-based-tesla-k10-gk110-based-tesla-k20/2

From http://www.theregister.co.uk/2012/09/18/nvidia_tesla_k20_benchmarks/

NVIDIA Hyper-Q is only available for GK110.

For consumers, GK110 = GeForce 780 to 780 Ti.


#75 stereointegrity
Member since 2007 • 12151 Posts

@Tighaman: you lose all credibility coming in here and using misterxmedia numbers as if they were real...

47mb of esram?

260x?

More powerful than the PS4...

Come on son


#76  Edited By ronvalencia
Member since 2008 • 29612 Posts

@stereointegrity said:

@Tighaman: you lose all credibility coming in here and using misterxmedia numbers as if they were real...

47mb of esram?

260x?

More powerful than the PS4...

Come on son

On the 47MB eSRAM issue, read http://www.pcper.com/news/General-Tech/Microsoft-Shows-Xbox-One-SoC-Hot-Chips-25

"The Xbox One SoC has 47MB of on-chip memory including 32MB eSRAM used by the CPU and GPU and 64KB of SRAM used by the audio co-processor".

Xbox One's APU is more expensive than PS4's APU.

I prefer the $499 Steam Machine from iBuyPower, which includes a Radeon R9 270 (20 CUs at 925MHz and 2GB of GDDR5 at 179GB/s).


#77 BattlefieldFan3
Member since 2012 • 361 Posts

Every xboner in this thread saying that poor performance from the Xbone comes from poor API drivers is riding Ballmer's nutsack.

The current APIs aren't poor, and they won't be improving things in the future. That's just a PR excuse from Microsoft to avoid saying the more crushing thing: that the PS4 is much more powerful than the Xbone. The only reason devs are saying the Xbone is "difficult" to develop for is that they have trouble deciding which parts of their game to store in the eSRAM and which parts to store in the slow DDR3 RAM. They aren't talking about coding difficulties like they had with the PS3.


#78  Edited By Jamex1987
Member since 2008 • 2187 Posts

DDR5 over DDR3 isn't going to make a noticeable difference in anything, just like DDR2 to DDR3 barely made a difference. Computers today use DDR3.


#79  Edited By tormentos
Member since 2003 • 33798 Posts

@Thunder7151 said:

ESRAM has much lower latency and higher bandwidth than GDDR5.

Latency affect CPU is basically a non issue for GPU.

Also you are comparing it to GDDR5 on PC,which is use for video memory,on PS4 the memory controller is different is not the same,as the one use for PC,funny how the xbox one CPU is actually faster than the PS4 one,1.7,yet the PS4 beat the xbox one in CPU test done,how is that possible if the PS4 CPU has more latency do to using Gddr5.?

ESRAM help the xbox in some cases in other it will not,reason why no FPS runs at even 900p on xbox one,anything that doesn't fit on those 32MB of ram will cause problems,also the actual bandwidth the real one achievable is lower than the PS4 one.

@ronvalencia said:

At 1080p, most games are not ROPs-limited; games such as COD: Ghosts and Battlefield 4 are mostly CU-bound. A CU includes ALUs and TMUs. ROPs handle memory writes.

A 1.3 TFLOPS GCN (prototype-7850) with 12 CUs + 32 ROPs was proven to be inferior to a 1.76 TFLOPS retail GCN (7850) with 16 CUs + 32 ROPs. Both the prototype-7850 and the retail 7850 have the same 153.6 GB/s memory bandwidth.

Xbox One with 32 ROPs would not change the current situation.

The Xbox One doesn't have a prototype 7850; it has a Bonaire GPU, period, confirmed by MS, and Bonaire is 16 ROPs. You want to make it seem like MS stripped half the ROPs for no reason at all, other than you thinking they are unnecessary. No, it's not like that: the Xbox One has a Bonaire GPU, which has 16 ROPs, period.

@04dcarraher said:

ROPs make a difference and affect the GPU's end results.

ROPs are responsible for final pixel output, and having 16 ROPs definitely puts the Xbox One at a disadvantage; many PC GPUs, and the PS4, have 32. The difference in raw shader performance (12 CUs vs 18 CUs) can definitely be a problem in games that run more complex lighting routines and other shader-intensive jobs on each pixel, but all of the resolution differences between Xbox One and PS4 games at launch are likely the result of the X1 being ROP-bound, along with it having less processing power. This is probably why Microsoft claimed it saw a bigger increase in realized performance from raising the GPU clock from 800MHz to 853MHz. The ROPs operate at the GPU clock, so in a ROP-bound scenario an increase in GPU clock would raise performance more than adding more CUs.

But the 7770 can do 1080p, with 72GB/s of bandwidth.

http://www.anandtech.com/bench/product/777?vs=776

I don't think the 7770 needs 32 ROPs, but the 7790 probably does: it has a higher TFLOP count than the 7850, yet it performs like 20% slower.

I think that if MS had gone with GDDR5 they would now be hitting at least 900p with the same quality as the PS4, instead of 720p.

Look at that benchmark I posted: the 7770 is very capable of pulling 1080p, and not only in racing games but also in FPS games.

I think ESRAM is the real problem here, although I have said many times that the Xbox One has less usable power than the 7770, actually 100GFLOPs less of usable power. In the end, less power is less power, and ESRAM was hinted since before launch to be a pain in the ass for developers; the Xbox One is underperforming its own spec.

Don't read too much into that bold part. MS claimed they saw a bigger gain from the clock increase, but the fact is it would have been impossible for them to do anything else to bump the spec without delaying or facing a horrible shortage of epic proportions. The Xbox One GPU has 14 CUs, 2 of which are disabled for yields; if MS had turned those 2 on, they would have had a massive shortage, because every single chip with less than 14 working CUs would have been thrown away.

So they claimed they saw a bigger gain from the upclock, but in real life everyone knows 2 more CUs would have helped the Xbox One more. No matter what, the 7790 with 14 CUs beats the 7770 with 10 CUs, regardless of the 7790 being more ROP-restricted.

The link I posted proves that easily: the gain from having extra CUs would have been bigger. MS was just damage controlling as best they can.
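The upclock-vs-CUs trade can be put in numbers. A sketch using the published clocks; which option wins depends on whether the workload is shader-bound or fillrate-bound:

```python
def gflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1000.0   # GCN: 64 lanes/CU, 2 FLOPs per FMA

def gpix_s(rops, mhz):
    return rops * mhz / 1000.0           # peak pixel fillrate

# Option A: the actual choice, 12 CUs upclocked from 800 to 853 MHz.
# Option B: the hypothetical one, 14 CUs at the original 800 MHz.
print(f"A: {gflops(12, 853):.0f} GFLOPs, {gpix_s(16, 853):.1f} Gpix/s")  # 1310, 13.6
print(f"B: {gflops(14, 800):.0f} GFLOPs, {gpix_s(16, 800):.1f} Gpix/s")  # 1434, 12.8
# More CUs wins on shader math; the upclock wins on ROP-bound work,
# which is consistent with MS claiming the clock bump helped more.
```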

@navyguy21 said:

Right now? Yes, absolutely.

Down the road? Who knows; it's all about the game and/or the engine.

PS3 was also the more powerful console, but we rarely saw that in action.

I'm just saying that raw comparisons are misleading. We all know the PS4 is more powerful; it would be silly to deny.

All I'm saying is the difference isn't as huge as cows are saying, or as small as lems are saying. It's up to the devs to determine.

The PS3 was a pain in the ass to code for and had an exotic CPU; that is not the case with the Xbox One.

Misleading how, exactly?

What, 1080p vs 720p is not huge? Since when is that not huge? That sort of thing happens on PC, and the one with 1080p gets crowned the undisputed graphics king. Hell, on PC, GPUs get crowned graphics king just for doing 10 FPS more across the board in most games, even less in many cases.

If the R9 290 were able to beat the Titan by doubling its resolution, the hardware sites would be up in arms right now proclaiming the R9 290 the supreme champ of GPUs.

The Xbox One can only do 1080p in games that are not that demanding; in anything else, it just plain doesn't.


#80 tormentos
Member since 2003 • 33798 Posts

@04dcarraher said:

You don't think the PS4's GPU has an allocated reserve itself? Also, even with a 10% allocation, the X1 still has about 1.2 TFLOPs, which is 66% of 1.8 TFLOPs, meaning a 34% gap in total GPU processing power. And it shows, with games like BF4 only seeing roughly a 36% difference in resolution (720p vs 900p), or AC4 (900p vs 1080p). Also, the difference in texture fillrate between the two GPUs is only 29%.

Where are you getting the X1 having the OS or Kinect eating into it? Both the X1 and the PS4 OS allocate two CPU cores, and the PS4's OS uses more memory than MS's.

The memory bandwidth on the PS4 offers no real gains over DDR3 when the memory is just acting as a cache; the PS4's advantage lies with its GPU, which is less than 40% stronger overall.

Let me explain again where the notion of 50% comes from.

The Xbox One with a 10% GPU reservation is 1.18TF, not 1.2TF.

The PS4 is 1.84TF.

1840 - 1180 = 660 GFLOPs.

Now, 660 GFLOPs is more than half the total power of the Xbox One:

660 x 2 = 1320 GFLOPs > 1180 GFLOPs.

The difference between the PS4 and the Xbox One is 660 GFLOPs, and that is more than 50% of the Xbox One's total power.
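The 30%-vs-50% argument in this thread is mostly a baseline choice; the same 660 GFLOP gap yields both numbers. A quick sketch using the figures above:

```python
x1, ps4 = 1180.0, 1840.0   # GFLOPs, per the post above
gap = ps4 - x1             # 660 GFLOPs

print(f"PS4 over X1:  +{gap / x1:.0%}")   # +56% -> "50%+ stronger"
print(f"X1 under PS4: -{gap / ps4:.0%}")  # -36% -> "~35% weaker"
```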

And once again, we have not a single source claiming the PS4 has a 10% GPU reservation. The PS4 doesn't have Kinect, and doesn't have Snap plus a Metro-like accelerated UI.

Even if the PS4 has a GPU reservation, it would not be even close to 10%, hell, not even 5%, because the PS4 is not trying to be a cable box with a camera for a remote and PiP.

Yeah, use BF4 and forget Ghosts, which is 720p on Xbox One while the PS4 version is 1080p. In fact, the only game that is 900p on PS4 is BF4; on Xbox One, DR3 and Killer Instinct are also 720p.

So what's the gap for 1080p vs 720p?

You also forget that while BF4 is 900p on PS4, it runs 10FPS faster than the Xbox One version most of the time.

That bold part is funny; everyone knows the Xbox One uses GPU resources both for its UI, which is part of the OS, and for Kinect. It was stated by MS that the reservation is for Snap, Kinect and the Xbox One UI; Snap requires GPU time and Kinect does compute on the GPU. But that is not all: you know the sound chip, SHAPE? Most of it is also for Kinect, with very little for developers.

Yeah, that must be why GPU makers dropped DDR3 in favor of GDDR5: because there is no difference... lol


#81  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:


@ronvalencia said:

At 1080p, most games are not ROPs-limited; games such as COD: Ghosts and Battlefield 4 are mostly CU-bound. A CU includes ALUs and TMUs. ROPs handle memory writes.

A 1.3 TFLOPS GCN (prototype-7850) with 12 CUs + 32 ROPs was proven to be inferior to a 1.76 TFLOPS retail GCN (7850) with 16 CUs + 32 ROPs. Both the prototype-7850 and the retail 7850 have the same 153.6 GB/s memory bandwidth.

Xbox One with 32 ROPs would not change the current situation.

The Xbox One doesn't have a prototype 7850; it has a Bonaire GPU, period, confirmed by MS, and Bonaire is 16 ROPs. You want to make it seem like MS stripped half the ROPs for no reason at all, other than you thinking they are unnecessary. No, it's not like that: the Xbox One has a Bonaire GPU, which has 16 ROPs, period.

AMD GCN's CU design is the same across different GCN ASICs; aside from 64-bit double-precision FP rates, most GCN instruction latencies are identical across ASICs.

The existence of the 7970's 32 ROPs (the same design as the PS4's 32 ROPs) and its gaming results debunk any argument that 16 ROPs are the limiter for a 1080p render target.

Unlike you, I don't go by high-level SKU code names.

@tormentos said:

What, 1080p vs 720p is not huge? Since when is that not huge? That sort of thing happens on PC, and the one with 1080p gets crowned the undisputed graphics king. Hell, on PC, GPUs get crowned graphics king just for doing 10 FPS more across the board in most games, even less in many cases.

The difference between 720p and 1080p is smaller than the difference between 1920x1080 and 5760x1080.

Furthermore, the difference between the X1 and the PS4 is not just 720p vs 1080p; the Xbox One's render targets float around 720p, 900p and 1080p.

@tormentos said:

If the R9 290 were able to beat the Titan by doubling its resolution, the hardware sites would be up in arms right now proclaiming the R9 290 the supreme champ of GPUs.

This proposition doesn't exist. GK110 has its own advantages over the R9-290X, e.g. a superior TMU count.

@tormentos said:

Yeah, that must be why GPU makers dropped DDR3 in favor of GDDR5: because there is no difference... lol

You are forgetting that PC GPU vendors rarely include a large ESRAM block in their PC SKUs.

AMD would rather include a small, fast GDDR5 memory pool in place of a large ESRAM block; e.g., AMD's own embedded dGPU solution has MCM (multi-chip module) GDDR5 on the chip package. AMD can't stop MS from being stupid with their transistor budget.


#82  Edited By ronvalencia
Member since 2008 • 29612 Posts
@Jamex1987 said:

DDR5 over DDR3 isn't going to make a noticeable difference in anything, just like DDR2 to DDR3 barely made a difference. Computers today use DDR3.

That's GDDR5, i.e. "quad-pumped" DDR3, which was designed to compete against Rambus (another "quad-pumped" memory type).

Going beyond 1080p, GDDR5 makes a difference for both my 7950 (950MHz) and my R9-290 (947MHz).

I have tested my 7950 (950MHz) with its memory downclocked to DDR3-2400 levels, and Crysis 3 is still playable at 1080p, but it's not going to win any benchmark scores.

On gaming PCs, the highest benchmark scores mean GDDR5, i.e. whatever it takes to win a benchmark.
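That downclock experiment maps to the same bus math as everything else in this thread. A sketch, assuming the 7950's 384-bit bus and its stock 5 GT/s GDDR5:

```python
def peak_gb_s(bus_bits, gt_s):
    return bus_bits / 8 * gt_s   # bus width in bytes x transfer rate

print(f"7950 stock GDDR5 (5.0 GT/s):  {peak_gb_s(384, 5.0):.0f} GB/s")  # 240
print(f"7950 at DDR3-2400 (2.4 GT/s): {peak_gb_s(384, 2.4):.0f} GB/s")  # ~115
# Less than half the bandwidth: still playable at 1080p, but scores drop.
```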


#83 iambatman7986
Member since 2013 • 4650 Posts

Why make it more complicated by adding ESRAM? Just use 8GB of GDDR5 and be done with it. The PS3 was complicated and you saw it in multiplats; this gen, the One is weaker and harder to develop for. I'm not sure I get what MS was going for with this hardware design.


#84 Tighaman
Member since 2006 • 1038 Posts

@stereointegrity: why does everything I say have to be from that site? I told you, I like to read way, way beyond these forums. This right here is for shits and giggles, because FANBOYS on this site just rinse and repeat whatever helps their agenda, but never read about the flaws in their agenda or the advantages of the opposition. Many wars were lost that way.