[QUOTE="lx_theo"][QUOTE="StormyJoe"]Sounds like the Cell. Seems like Sony learned their lesson. How you got to this conclusion completely baffles me. He means it's basically the reverse of this (7th) gen, where PS3 had theoretical advantage in power that was much harder to tap into while the 360 had the developer friendly approach with some parts being better than the PS3. I don't think it's quite right but I can see why people would think like that.So, he basically said that if you know what you are doing, the XB1 architecture is actaully faster, but you have to take that learning curve into account. Whereas in the PS4's architecture, it is easier to fully utilize the hardware of the platform from the get go.
A lot of cows are going to be upset about that.
FPSfan1985
Didn't understand shit ^[QUOTE="Rocker6"]
Ouch, see cows are busy with the damage control. The idea X1 has a comparable, and in some situations even better RAM than the PS4 stings after all that "GDDR5" nonsense, huh? ;)
Douevenlift_bro
Lol lemmings...
Second time I was called a lemming today despite never owning a MS system... this is progressing nicely...
---------------
Now, could you explain what part of my post is wrong?
Are you saying the X1 and the PS4 ultimately don't have comparable RAM?
Or are you denying the part of the video about how the X1's memory design can be more efficient in some cases, although it's generally harder to work with?
Didn't understand shit ^[QUOTE="Douevenlift_bro"]
[QUOTE="Rocker6"]
Ouch, see cows are busy with the damage control. The idea X1 has a comparable, and in some situations even better RAM than the PS4 stings after all that "GDDR5" nonsense, huh? ;)
Rocker6
Lol lemmings...
Second time I was called a lemming today despite never owning a MS system... this is progressing nicely...
---------------
Now, could you explain what part of my post is wrong?
Are you saying the X1 and the PS4 ultimately don't have comparable RAM?
Or are you denying the part of the video about how the X1's memory design can be more efficient in some cases, although it's generally harder to work with?
Cuz only a braindead lemming would misquote him for saying 8GB GDDR5 is worse when he CLEARLY said it is advantageous. If you're not a lemming, then adopt a brain and don't act like one ;)
[QUOTE="StormyJoe"][QUOTE="DrTrafalgarLaw"] But you need to put in a lot more work, drive the costs up and it produces much more heat. It's not worth it for having a theoretical bandwidth you're never going to hit. show me one game that needs 1000+ GB/s of bandwidth to run. The XB1 has an already gimped memory set-up that can only be up to par by using that esram efficiently. The esram is going to be the cell of this generation, third parties won't care developing/coding for it.DrTrafalgarLaw
Or, as with most of MS's dev tools, the SDK makes it easy to do.
It's still an overhead for developers that have never worked with eSRAM before; it doesn't matter how good the tools are. Microsoft might've had an edge with eDRAM but they cheaped out and spent the budget on Kinect 2.0. Don't forget the gimped GPU only capable of 1.1 TFLOPs.
It's not really overhead, it's a faster type of memory, so it's either dealt with automatically like cache, or you use it manually, which is a simple memcpy.
I could listen to him talk all day. Sony should use him more.
Xaero_Gravity
Yes... very interesting stuff.
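To make the "simple memcpy" point above concrete, here is a minimal C sketch of manually staging a buffer through a small fast scratch pool. It's purely illustrative: the scratch array, the 32 MB size, and the process_in_scratch helper are hypothetical stand-ins, not the actual XB1 SDK API, which isn't described anywhere in this thread.
[code]
#include <string.h>
#include <stdint.h>

/* Hypothetical fast on-chip scratch region (a stand-in for eSRAM).
 * In a real console SDK this would come from a platform allocator;
 * here it is just a plain static buffer for illustration. */
#define SCRATCH_SIZE (32u * 1024u * 1024u)   /* 32 MB, matching the widely quoted eSRAM size */
static uint8_t g_scratch[SCRATCH_SIZE];

/* Stage a buffer into the fast pool, do the bandwidth-heavy work there,
 * then copy the result back out to main (DDR3-like) memory. */
void process_in_scratch(uint8_t *main_mem_buffer, size_t size,
                        void (*kernel)(uint8_t *, size_t))
{
    if (size > SCRATCH_SIZE)
        size = SCRATCH_SIZE;                     /* only what fits in the small fast pool */

    memcpy(g_scratch, main_mem_buffer, size);    /* main memory -> fast scratch */
    kernel(g_scratch, size);                     /* work on the fast copy */
    memcpy(main_mem_buffer, g_scratch, size);    /* fast scratch -> main memory */
}
[/code]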
[QUOTE="lx_theo"][QUOTE="FPSfan1985"] He's not wrong he's just over playing the complexity needed to take advantage of the on chip ram. The same amount of complexity will be needed for the gddr5 in the ps4. Which is nothing when compared to the complexities of the cell.FPSfan1985For one, he's talking about architecture choice. He literally says that the architecture choice means that it will be less complex. Two, it still doesn't stop it from sounding like the cell and being a similar concept, like I said. Maybe to someone who doesn't understand anything about programming. Yep, Cerny knows nothing about it.
Maybe to someone who doesn't understand anything about programming. Yep, Cerny knows nothing about it. You made the comparison to the Cell, not him.
[QUOTE="FPSfan1985"][QUOTE="lx_theo"] For one, he's talking about architecture choice. He literally says that the architecture choice means that it will be less complex. Two, it still doesn't stop it from sounding like the Cell and being a similar concept, like I said.
lx_theo
[QUOTE="lx_theo"]Yep, Cerny knows nothing about it. You made the comparison to the cell, not him. Ah, you meant that.[QUOTE="FPSfan1985"] Maybe to someone who doesn't understand anything about programming.FPSfan1985
Okay, well you're just an idiot, then. Again, more difficult to program effectively for but potentially better performance is still a concept that applies to both.
If you can prove that both having that trait does not make them similar, then you may not be an idiot. Remember, I'm not saying the scale is the same. I'm not saying it will affect the system as much. I'm not commenting on how used to it developers are. I'm saying they both are similar.
And since we both know you can't do that, have a good day, sir. I have places to be.
[QUOTE="Rocker6"]
[QUOTE="Douevenlift_bro"]Didn't understand shit ^
Lol lemmings...
Douevenlift_bro
Second time I was called a lemming today despite never owning a MS system... this is progressing nicely...
---------------
Now, could you explain what part of my post is wrong?
Are you saying the X1 and the PS4 ultimately don't have comparable RAM?
Or are you denying the part of the video about how the X1's memory design can be more efficient in some cases, although it's generally harder to work with?
Cuz only a braindead lemming would misquote him for saying 8GB GDDR5 is worse when he CLEARLY said it is advantageous. If you're not a lemming, then adopt a brain and don't act like one ;)
Where did I say PS4's RAM is worse? :?
I said it's comparable to the one in X1. While X1 may have a slight advantage in some cases, the downside is, it's harder to work with. Ultimately, things balance themselves out, making PS4 and X1 RAM roughly equal.
Bottom line is, as far as RAM goes, both consoles are nicely covered. That's my point.
Translation: Sony wouldn't be capable of making that fancy programming, so they went the cheap off-the-shelf route with one GDDR5 pool :lol:
:lol: stupid lems in this thread are thinking M$ used eDRAM? Dat 1TB/s transfer rate? :lol: Dumbasses, M$ cheaped out and went with eSRAM, which isn't even close to 1TB/s. Cerny clearly states that the 8GB GDDR5 is better because it is more readily available and you don't need to create fancy programming to take advantage of its fast speeds.
I_can_haz
[QUOTE="I_can_haz"]Translation: Sony wouldn't be capable of making that fancy programming, so they went the cheap off-the-shelf route with one GDDR5 pool :lol:Wow, you're a tool. :lol: You don't even understand how memory works do you?:lol: stupid lems in this thread are thinking M$ used EDRAM? Dat 1TB/s transfer rate? :lol: Dumbasses, M$ cheaped out and went with ESRAM which isn't even close to 1TB/s. Cerny clearly states that the 8GDDR5 is better because it is more readily available and you don't need to create fancy programming to take advantage of its fast speeds.
RR360DD
Fast, readily available RAM >>>>> theoretically fast RAM under certain conditions.
Xbox One has more bandwidth (combining eSRAM and memory bandwidth, it's quite a bit more). PS4 has a much better GPU. Multiplats will look the same. Exclusives will depend on how good developers are at making the most of what they have, same as any other generation. I wonder if Microsoft's claims of optimising the use of memory mean they already know how to make use of the more complex setup. Isn't it similar to the Xbox 360, so they have the experience?
tdkmillsy
Exactly, it's similar to the 360's architecture.
And don't forget dat clowd
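For reference, the "combining eSRAM and memory bandwidth" claim a couple of posts up is usually backed by arithmetic like the sketch below. The 68.3 GB/s and 102.4 GB/s figures are the ones that were widely reported around the reveal, not numbers taken from this thread, and simply adding peak figures assumes both pools are being saturated at the same time, which real workloads rarely manage.
[code]
#include <stdio.h>

/* Back-of-the-envelope peak-bandwidth sums.
 * Figures are the ones widely reported in mid-2013, not quoted in this thread. */
int main(void)
{
    const double xb1_ddr3  =  68.3;   /* GB/s, 256-bit DDR3-2133 */
    const double xb1_esram = 102.4;   /* GB/s, originally quoted eSRAM peak */
    const double ps4_gddr5 = 176.0;   /* GB/s, 256-bit GDDR5 */

    /* Adding peaks assumes perfectly overlapped traffic to both pools. */
    printf("XB1 combined peak : %.1f GB/s\n", xb1_ddr3 + xb1_esram);
    printf("PS4 single pool   : %.1f GB/s\n", ps4_gddr5);
    return 0;
}
[/code]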
Translation: Sony wouldn't be capable of making that fancy programming, so they went the cheap off-the-shelf route with one GDDR5 pool :lol:
Wow, you're a tool. :lol: You don't even understand how memory works, do you?
[QUOTE="RR360DD"][QUOTE="I_can_haz"]
:lol: stupid lems in this thread are thinking M$ used eDRAM? Dat 1TB/s transfer rate? :lol: Dumbasses, M$ cheaped out and went with eSRAM, which isn't even close to 1TB/s. Cerny clearly states that the 8GB GDDR5 is better because it is more readily available and you don't need to create fancy programming to take advantage of its fast speeds.
I_can_haz
Fast, readily available RAM >>>>> theoretically fast RAM under certain conditions.
Only for chump devs who can't find their own ass with both hands
The best devs out there will make good use of it, thanks to MS's superior dev tools :cool:
Wow, you're a tool. :lol: You don't even understand how memory works, do you?
[QUOTE="I_can_haz"]
[QUOTE="RR360DD"] Translation: Sony wouldn't be capable of making that fancy programming, so they went the cheap off-the-shelf route with one GDDR5 pool :lol:RR360DD
Fast, readily available RAM >>>>> theoretically fast RAM under certain conditions.
Only for chump devs who can't find their own ass with both hands
The best devs out there will make good use of it, thanks to MS's superior dev tools :cool:
http://www.eurogamer.net/articles/digitalfoundry-xbox-one-memory-better-in-production-hardware
XanderZane
Interesting point about the custom audio chips taking load away from the CPU in the Xbox One. Looking more like CPU performance is better in the Xbox One and the GPU is better in the PS4. Isn't this the opposite of the Xbox 360 and PS3?
[QUOTE="ShadowDeathX"]
Starts at 39 Minutes.
http://youtu.be/JJW5OKbh0WA?t=38m55s
StormyJoe
So, he basically said that if you know what you are doing, the XB1 architecture is actually faster, but you have to take that learning curve into account. Whereas in the PS4's architecture, it is easier to fully utilize the hardware of the platform from the get go.
A lot of cows are going to be upset about that.
I think it's not that hard for developers to learn that, considering the architecture is still quite similar to that of the Xbox 360's GPU + eDRAM.
Cheap ass MS couldn't even use the "EDRAM" referred to in this video. They have ESRAM and bonehead lemmings are still rejoicing :lol:
Cheap ass MS couldn't even use the "EDRAM" referred to in this video. They have ESRAM and bonehead lemmings are still rejoicing :lol:
Douevenlift_bro
That's because they were so damn worried about Kinect and poured console budget into it.
While Sony dumped everything into the console and made it just for games.
[QUOTE="superclocked"]40:45 - He says that a large cache, like that in the XBox One, would actually allow developers to fully utilize the hardware, and that GDDR5 is just easier to for developers to use.. So, Sony just said that Micrsoft's approach to the RAM is actually better for performance :oStormyJoe
Yes he did. Sony went for an architecture that was the easiest to develop for, in their minds.
And people will ignore this.
[QUOTE="Nonstop-Madness"]The Xbox One's eSRAM doesn't have a bandwidth of 1TBps; not even close.
tdkmillsy
Proof??? Don't feed this troll.
[QUOTE="XanderZane"]http://www.eurogamer.net/articles/digitalfoundry-xbox-one-memory-better-in-production-hardwaretdkmillsyInteresting point about the custom audio chips taking load away from CPU in Xbox One. Looking more like CPU performance better in Xbox One and GPU better in PS4. Isn't this the opposite to Xbox 360 and PS3?Funny you say that when PS4 has a separate chip to handle UI and Xbone still uses CPU for UI. Fact of the matter is that nobody knows shit about CPU performance at this point and you can't come up with conclusions based on an article.
Interesting point about the custom audio chips taking load away from the CPU in the Xbox One. Looking more like CPU performance is better in the Xbox One and the GPU is better in the PS4. Isn't this the opposite of the Xbox 360 and PS3?
Funny you say that when the PS4 has a separate chip to handle UI and the Xbone still uses the CPU for UI. The fact of the matter is that nobody knows shit about CPU performance at this point and you can't come up with conclusions based on an article.
On the contrary, nearly everyone in System Wars is coming up with conclusions based on articles that they've read. To be perfectly honest, both systems have their advantages. The PS4's GPU has more shader processing power, more compute units, custom hardware to improve compute performance, more RAM available, and a RAM setup that is easier to develop for. The XBox One will use less memory bandwidth, due to its custom texture compression hardware and embedded eSRAM, it has a custom audio chip, the cloud is being used to control the AI in XBox One games, and it has a ton of non-gaming features. The XBox One also has Kinect, which takes us one step closer to complete gaming immersion in a 3D world...
[QUOTE="tdkmillsy"][QUOTE="XanderZane"]http://www.eurogamer.net/articles/digitalfoundry-xbox-one-memory-better-in-production-hardware
I_can_haz
Oh man, Cerny made a whole cadre of sheens sad last gen... but none of those are left; the entire species has flip-flopped.
I'd post a list of the things sheens complained the 360 did and extolled that the PS3 did, and which they have now done a 180 on, but I've posted it before.
[QUOTE="StormyJoe"]
[QUOTE="Douevenlift_bro"]You just don't get it do ya?
Douevenlift_bro
Apparently you don't. It wasn't like he was speaking Latin...
He said at the end of the day 176GB/s is faster because it's readily available. Why on earth do you wanna spin that?
That's NOT what he said. He said that it is easier for devs to get usage out of it unless they are familiar with that type of architecture.
40:45 - He says that a large cache, like that in the XBox One, would actually allow developers to fully utilize the hardware, and that GDDR5 is just easier for developers to use. So, Sony just said that Microsoft's approach to the RAM is actually better for performance :o
superclocked
Nice way of completely missing the point.
The way they were going to do it was using a 128-bit bus at 88GB/s and bringing the performance up with eDRAM to 1TB per second; it would have been a combination of GDDR5 + eDRAM, not DDR3 + eSRAM.
By the way, better utilizing your hardware means nothing when your hardware is underpowered; no matter how you slice it, eSRAM will not inject 700 GFLOPS into the Xbox One to close the gap.
The Xbox One can have 300GB/s of bandwidth with 1.2TF and it means nothing; the bus will never ever be saturated by such a weak GPU. In fact, look at the bandwidth of the 7770 for reference, which has 1.27TF, and look at the bandwidth of the 7790 Bonaire, which is stronger than the Xbox One.
The 7770 has 75GB/s bandwidth.
The 7790 has 98GB/s bandwidth.
So yeah, having 300GB/s of bandwidth with such a weak GPU means nothing. The 7850 and 7870 are not bandwidth-starved, so the PS4, with the 176GB/s it has, has more than enough bandwidth to take advantage of its GPU without making things complicated.
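A quick sanity check of that bandwidth-per-TFLOP argument, using the GPU figures quoted in the post above. The ~1.84 TF for the PS4, ~1.79 TF for the 7790, and the 68.3 GB/s DDR3 figure for the Xbox One are assumptions pulled from public spec sheets of the time, not from this thread.
[code]
#include <stdio.h>

/* GB/s of memory bandwidth per TFLOP of shader throughput.
 * 7770 bandwidth, 7790 bandwidth, and the 1.2 TF Xbox One figure come
 * from the post above; the other numbers are assumed from public specs. */
int main(void)
{
    struct { const char *name; double gbps; double tflops; } gpus[] = {
        { "HD 7770",  75.0, 1.27 },
        { "HD 7790",  98.0, 1.79 },   /* compute figure assumed from specs */
        { "Xbox One", 68.3, 1.20 },   /* DDR3 pool only; bandwidth figure assumed */
        { "PS4",     176.0, 1.84 },   /* compute figure assumed from specs */
    };

    for (int i = 0; i < 4; i++)
        printf("%-9s %6.1f GB/s / %.2f TF = %5.1f GB/s per TF\n",
               gpus[i].name, gpus[i].gbps, gpus[i].tflops,
               gpus[i].gbps / gpus[i].tflops);
    return 0;
}
[/code]
By this ratio the PS4's 176GB/s comes out well above the discrete cards listed, which is the poster's "more than enough bandwidth" point.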
[QUOTE="ShadowDeathX"]
Starts at 39 Minutes.
http://youtu.be/JJW5OKbh0WA?t=38m55s
StormyJoe
So, he basically said that if you know what you are doing, the XB1 architecture is actually faster, but you have to take that learning curve into account. Whereas in the PS4's architecture, it is easier to fully utilize the hardware of the platform from the get go.
A lot of cows are going to be upset about that.
The comparison he put up on the screen was:
256-bit bus / GDDR5 = 176 GB per second
vs.
128-bit bus / GDDR5 = 88 GB per second + eDRAM at 1000 GB per second = 1088* GB per second
If I'm not mistaken, the Xbox One is using GDDR3 and eSRAM which isn't the same comparison.
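As an aside, the 176 GB/s and 88 GB/s numbers on that slide fall straight out of bus width times per-pin data rate. The 5.5 Gb/s per-pin rate in this sketch is an assumption chosen to be consistent with the 176 GB/s figure, not something stated in the talk.
[code]
#include <stdio.h>

/* Peak bandwidth (GB/s) = bytes per transfer * giga-transfers per second
 *                       = (bus width in bits / 8) * per-pin data rate in Gb/s. */
double peak_gbps(int bus_bits, double pin_rate_gbps)
{
    return (bus_bits / 8.0) * pin_rate_gbps;
}

int main(void)
{
    const double gddr5_rate = 5.5;   /* Gb/s per pin, assumed to match the 176 GB/s figure */
    printf("256-bit GDDR5: %.0f GB/s\n", peak_gbps(256, gddr5_rate));  /* 176 */
    printf("128-bit GDDR5: %.0f GB/s\n", peak_gbps(128, gddr5_rate));  /*  88 */
    return 0;
}
[/code]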
[QUOTE="StormyJoe"]
[QUOTE="ShadowDeathX"]
Starts at 39 Minutes.
http://youtu.be/JJW5OKbh0WA?t=38m55s
richsena
So, he basically said that if you know what you are doing, the XB1 architecture is actually faster, but you have to take that learning curve into account. Whereas in the PS4's architecture, it is easier to fully utilize the hardware of the platform from the get go.
A lot of cows are going to be upset about that.
The comparison he put up on the screen was:
256-bit bus / GDDR5 = 176 GB per second
vs.
128-bit bus / GDDR5 = 88 GB per second + eDRAM at 1000 GB per second = 1088* GB per second
If I'm not mistaken, the Xbox One is using GDDR3 and eSRAM which isn't the same comparison.
DDR3 and eSRAM, yeah.
[QUOTE="StormyJoe"]
[QUOTE="ShadowDeathX"]
Starts at 39 Minutes.
http://youtu.be/JJW5OKbh0WA?t=38m55s
richsena
So, he basically said that if you know what you are doing, the XB1 architecture is actually faster, but you have to take that learning curve into account. Whereas in the PS4's architecture, it is easier to fully utilize the hardware of the platform from the get go.
A lot of cows are going to be upset about that.
The comparison he put up on the screen was:
256-bit bus / GDDR5 = 176 GB per second
vs.
128-bit bus / GDDR5 = 88 GB per second + eDRAM at 1000 GB per second = 1088* GB per second
If I'm not mistaken, the Xbox One is using GDDR3 and eSRAM which isn't the same comparison.
Yes, but utilizing available eDRAM will increase performance. So, the numbers floating around on XB1 aren't 100% accurate because those numbers are not taking this into account.
[QUOTE="richsena"]
[QUOTE="StormyJoe"]
So, he basically said that if you know what you are doing, the XB1 architecture is actually faster, but you have to take that learning curve into account. Whereas in the PS4's architecture, it is easier to fully utilize the hardware of the platform from the get go.
A lot of cows are going to be upset about that.
StormyJoe
The comparison he put up on the screen was:
256-bit bus / GDDR5 = 176 GB per second
vs.
128-bit bus / GDDR5 = 88 GB per second + eDRAM at 1000 GB per second = 1088* GB per second
If I'm not mistaken, the Xbox One is using GDDR3 and eSRAM which isn't the same comparison.
Yes, but utilizing available eDRAM will increase performance. So, the numbers floating around on XB1 aren't 100% accurate because those numbers are not taking this into account.
What Cerny said, though, doesn't support your allegation that the Xbox One is actually faster than the PS4's setup. It may be faster than initially thought, but the comparison isn't as linear as in Cerny's presentation because of the different types of RAM being used.
Xbox One has a learning curve on eSRAM and PS4 has a learning curve on high-latency GDDR5.
magicalclick
There are no latency issues associated with GDDR5 compared with DDR3. It's just a myth.
There is no latency difference between DDR3 and GDDR5 when you just look at the bare memory chips themselves. The latency difference between DDR3 and GDDR5 comes from the different memory controllers and the different scenarios of usage in PCs. DDR3 is usually used for CPUs while GDDR5 is used for GPUs.

The main task of GPUs is rendering, which requires a lot of bandwidth. GDDR5 is a high-performance RAM type that is able to outperform DDR3 easily in a typical rendering scenario. To make sure that the maximum bandwidth is available most of the time, GPU memory controllers combine many memory accesses into bursts. This is bad for latency and great for bandwidth, but since GPUs don't need low latency for rendering it doesn't affect performance.

CPUs don't need a lot of bandwidth since they're dealing with computing. DDR3 delivers enough bandwidth for any modern-day CPU, so GDDR5 would be overkill for computing. The computing tasks of a CPU are extremely latency-critical. That's why memory controllers for CPUs work in the complete opposite way to GPU memory controllers. Instead of burst-accessing the RAM, you make sure that every memory access can happen as immediately as possible. This will kill bandwidth, but it will have a positive impact on latency, which will eventually increase the computing performance of the CPU.

What does this mean for the PS4? The PS4 uses a state-of-the-art heterogeneous processor architecture from AMD (the so-called "HSA") which combines CPU and GPU in one single chip. To ensure that such a heterogeneous processor can deliver maximum bandwidth for rendering and minimum latency for computing, AMD integrates a special DRAM controller. This DRAM controller allows the CPU memory controller to have low-latency access while at the same time the GPU memory controller can burst-access the RAM. That's why Sony can go for maximum bandwidth with one big GDDR5 RAM pool without having any headaches because of latency.
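A rough way to see the burst-versus-latency tradeoff described above on an ordinary PC: a sequential walk lets the memory system stream and prefetch (bandwidth-friendly), while a dependent pointer-chase defeats that and exposes raw access latency. This is a generic host-side illustration, not console- or GDDR5-specific code.
[code]
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 24)   /* 16M entries, well past any cache on a typical PC */

/* Combine two rand() calls so this works even where RAND_MAX is only 32767. */
static size_t rnd_below(size_t n)
{
    size_t r = ((size_t)rand() << 15) ^ (size_t)rand();
    return r % n;
}

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: a random permutation that is one single cycle,
     * so every load in the chase below depends on the previous one. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rnd_below(i);
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    /* Sequential pass: streaming access, friendly to bursts and prefetching. */
    clock_t t0 = clock();
    volatile size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += next[i];
    double t_seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Dependent pointer-chase: each load waits on the previous one, latency-bound. */
    t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];
    double t_chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("sequential: %.3f s   pointer-chase: %.3f s   (sum=%zu, end=%zu)\n",
           t_seq, t_chase, (size_t)sum, p);
    free(next);
    return 0;
}
[/code]
On most desktop machines the chase comes out many times slower than the sequential pass; that gap is exactly what burst-oriented GPU memory controllers are built to hide.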
There are no latency issues associated with GDDR5 compared with DDR3. It's just a myth.
[QUOTE="magicalclick"]Xbox One has a learning curve on eSRAM and PS4 has a learning curve on high-latency GDDR5.
BigBoss154
Shhh lemmings like to make shit up.
[QUOTE="StormyJoe"]
[QUOTE="richsena"]
The comparison he put up on the screen was:
256-bit bus / GDDR5 = 176 GB per second
vs.
128-bit bus / GDDR5 = 88 GB per second + eDRAM at 1000 GB per second = 1088* GB per second
If I'm not mistaken, the Xbox One is using GDDR3 and eSRAM which isn't the same comparison.
richsena
Yes, but utilizing available eDRAM will increase performance. So, the numbers floating around on XB1 aren't 100% accurate because those numbers are not taking this into account.
What Cerny said, though, doesn't support your allegation that the Xbox One is actually faster than the PS4's setup. It may be faster than initially thought, but the comparison isn't as linear as in Cerny's presentation because of the different types of RAM being used.
I never said the XB1 was faster. What I am saying is that the gap between the PS4 and XB1 is less than what people are saying.
[QUOTE="richsena"]
[QUOTE="StormyJoe"]
Yes, but utilizing available eDRAM will increase performance. So, the numbers floating around on XB1 aren't 100% accurate because those numbers are not taking this into account.
StormyJoe
What Cerny said, though, doesn't support your allegation that the Xbox One is actually faster than the PS4's setup. It may be faster than initially thought, but the comparison isn't as linear as in Cerny's presentation because of the different types of RAM being used.
I never said the XB1 was faster. What I am saying is that the gap between the PS4 and XB1 is less than what people are saying.
No, it's exactly what people are saying. A PS2 vs. Xbox type of difference.