@tormentos:
PS4 titles such as Killzone: Shadow Fall are using 800MB for render targets alone.
At Microsoft’s BUILD event this year, the team showed the hardware-based tiled resources support added in DX11.2.
In the demo, a 3GB set of textures was sampled while only about 16MB of tiles were actually resident in RAM.
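For a rough sense of that ratio: D3D11.2 tiles are a fixed 64KB, so the demo's numbers work out like this (a back-of-the-envelope sketch; the 3GB/16MB figures come from the demo, the rest is just arithmetic):

```cpp
#include <cstdio>

int main() {
    // D3D11.2 tiled resources use a fixed 64KB tile size.
    const long long tileBytes    = 64 * 1024;
    const long long virtualBytes = 3LL * 1024 * 1024 * 1024; // ~3GB texture set
    const long long poolBytes    = 16LL * 1024 * 1024;       // ~16MB actually in RAM

    const long long virtualTiles  = virtualBytes / tileBytes; // tiles the app can address
    const long long residentTiles = poolBytes / tileBytes;    // tiles physically backed

    printf("addressable tiles: %lld, resident tiles: %lld (%.2f%% resident)\n",
           virtualTiles, residentTiles, 100.0 * residentTiles / virtualTiles);
    return 0;
}
```

That prints 49152 addressable tiles against 256 resident ones, i.e. barely half a percent of the texture set needs to sit in memory at any moment.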
Hardware tiled resources offer a bunch of improvements over the shader-based method available in Granite 1.8, especially when using HQ filtering modes such as full trilinear anisotropic filtering with a high degree of anisotropy.
Firstly, the shader can be simplified, reducing the instruction count from around 30 to 10-15 shader instructions in the case of anisotropic filtering.
Secondly, since no overlap needs to be added on tile borders, cache use and compression can be improved (see the sketch below).
Finally, streaming throughput is improved by 33%, as no mipmaps have to be generated for uploaded tiles.
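That second point is easy to quantify. A shader-based implementation pads every cache page with border texels so filtering can read across page edges; hardware PRT filters across tiles natively, so the padding disappears. A quick sketch with made-up but plausible page/border sizes:

```cpp
#include <cstdio>

int main() {
    // Shader-based virtual texturing (the Granite 1.8 style path) pads each
    // cache page with border texels so bilinear/aniso filtering can sample
    // across page edges. Hardware tiled resources filter across tiles
    // natively, so no border is needed. Sizes here are illustrative only.
    const int page   = 128; // cache page width/height in texels
    const int border = 4;   // border texels on each side (shader-based path)

    const int payload = (page - 2 * border) * (page - 2 * border);
    const int total   = page * page;
    printf("useful texels per page: %d of %d (%.1f%%)\n",
           payload, total, 100.0 * payload / total);
    return 0;
}
```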
The eSRAM is the dedicated hardware for tiled resources, and DirectX 11.2 contains the APIs to take advantage of it. AS I’VE SAID BEFORE. -_-
Microsoft has implemented the APIs in DirectX 11.2 so that developers don't have to build their own implementation from scratch.
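For the curious, here's roughly what that API path looks like in C++ (the entry points are the real D3D11.2 ones; the texture size, format and pool size are my own illustrative picks, and error handling/cleanup is trimmed):

```cpp
#include <d3d11_2.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D_FEATURE_LEVEL fl;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION, &dev, &fl, &ctx)))
        return 1;

    // 1. Ask the driver whether hardware tiled resources are supported.
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
    dev->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1, &opts, sizeof(opts));
    if (opts.TiledResourcesTier == D3D11_TILED_RESOURCES_NOT_SUPPORTED) {
        printf("No hardware tiled resources on this device.\n");
        return 0;
    }

    ID3D11Device2* dev2 = nullptr;
    ID3D11DeviceContext2* ctx2 = nullptr;
    dev->QueryInterface(__uuidof(ID3D11Device2), (void**)&dev2);
    ctx->QueryInterface(__uuidof(ID3D11DeviceContext2), (void**)&ctx2);

    // 2. A huge *virtual* texture: reserves address space, commits no memory.
    D3D11_TEXTURE2D_DESC td = {};
    td.Width = 16384; td.Height = 16384;       // ~128MB of BC1 if fully backed
    td.MipLevels = 1;  td.ArraySize = 1;
    td.Format = DXGI_FORMAT_BC1_UNORM;
    td.SampleDesc.Count = 1;
    td.Usage = D3D11_USAGE_DEFAULT;
    td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    td.MiscFlags = D3D11_RESOURCE_MISC_TILED;  // <-- the new DX11.2 flag
    ID3D11Texture2D* tex = nullptr;
    dev2->CreateTexture2D(&td, nullptr, &tex);

    // 3. A small tile pool: the only physical memory actually committed.
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = 16 * 1024 * 1024;           // 16MB = 256 tiles of 64KB
    bd.Usage = D3D11_USAGE_DEFAULT;
    bd.MiscFlags = D3D11_RESOURCE_MISC_TILE_POOL;
    ID3D11Buffer* pool = nullptr;
    dev2->CreateBuffer(&bd, nullptr, &pool);

    // 4. Point one virtual tile at one physical 64KB tile in the pool.
    D3D11_TILED_RESOURCE_COORDINATE coord = {}; // tile (0,0), subresource 0
    D3D11_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;
    UINT rangeFlags = 0, poolOffset = 0, tileCount = 1;
    ctx2->UpdateTileMappings(tex, 1, &coord, &region,
                             pool, 1, &rangeFlags, &poolOffset, &tileCount, 0);

    printf("Mapped 1 tile; the other unmapped tiles cost nothing.\n");
    return 0;
}
```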
The more exciting implication is the combination of tiled textures and the cloud. Developers could go crazy, since they wouldn't have to store these massive textures on a disc. I would imagine the possibility of actually streaming the tiles straight from the cloud in real time, thanks to the LZ encode/decode capabilities of the data move engines, straight into the eSRAM to be fed to the GPU. Or using the cloud to process your procedural textures for free rather than depending on your CPU. This idea is amazing.
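To be clear about what "streaming a tile" means at the API level: once a 64KB tile's worth of texels has arrived (from disc today; from a cloud stream in the scenario above, which is pure speculation on my part) and been LZ-decompressed, one call drops it into place. UploadTile here is a hypothetical helper name, not an API:

```cpp
#include <d3d11_2.h>

// Hypothetical helper: place one decompressed 64KB tile (fetched from disc,
// or speculatively from a cloud stream) into a mapped tile of a tiled
// texture. 'tileData' must point at exactly 64KB of texel data in the
// standard tiled layout.
void UploadTile(ID3D11DeviceContext2* ctx2, ID3D11Resource* tiledTex,
                UINT tileX, UINT tileY, const void* tileData)
{
    D3D11_TILED_RESOURCE_COORDINATE coord = {};
    coord.X = tileX; coord.Y = tileY;      // which virtual tile to fill
    D3D11_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;                   // one 64KB tile
    ctx2->UpdateTiles(tiledTex, &coord, &region, tileData, 0);
}
```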
The eSRAM and data move engines are not simply a workaround for bandwidth, and to label them as such (which I have seen numerous times) is disingenuous to the X1's design. Microsoft specifically equipped it with these for applications beyond mitigating bandwidth limitations: tiled textures, shadow mapping, cloud offloading, cloud streaming and, of course, a very fast scratchpad for the GPU. Simply put, eSRAM is superior to both GDDR5 and DDR3 for certain applications. That's why it's there. Not just to boost bandwidth.
PS4's GPU also supports the hardware implementation of PRT; the difference is that its tile pool has to live in main RAM, so it costs part of its RAM and RAM/GPU bandwidth while dealing with the GPU/RAM latency.
DirectX 11.2 introduces a new technology called Tiled Resources. Essentially, Tiled Resources raises and lowers graphics fidelity based on location and on what the player is actively viewing.
To simplify: imagine your house rendered as a video game. The room that you are in and the rooms visible to you are rendered as usual; as you approach an object it keeps looking sharp because its render quality has been increased, whereas the objects you are moving away from, and the room you no longer occupy, have had their render quality decreased.
It's like an automatic light dimmer, for video games.
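A minimal sketch of that "dimmer" on the CPU side (MinResidentMip and ApplyResidency are hypothetical helpers, and the one-mip-per-doubling-of-distance rule is made up for illustration; SetResourceMinLOD is the real D3D11 call that keeps sampling inside the mips you've left resident):

```cpp
#include <d3d11_2.h>
#include <cmath>
#include <algorithm>

// Toy residency policy in the spirit of the "house" analogy: the farther
// the camera is from an object, the higher (blurrier) the first mip level
// we bother keeping resident. Thresholds are invented for illustration.
int MinResidentMip(float distanceMeters)
{
    // Drop one extra mip for every doubling of distance past 1m.
    const float mips = std::log2(std::max(distanceMeters, 1.0f));
    return static_cast<int>(mips);
}

// Clamp sampling so the GPU never filters into detail we have unmapped.
void ApplyResidency(ID3D11DeviceContext2* ctx2, ID3D11Resource* tiledTex,
                    float distanceMeters)
{
    // The actual (un)mapping of tiles happens via UpdateTileMappings;
    // SetResourceMinLOD then keeps samples inside the resident mips.
    ctx2->SetResourceMinLOD(tiledTex,
                            static_cast<float>(MinResidentMip(distanceMeters)));
}
```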
OpenGL has a very similar (though not as complete) extension.
PS4's load on the CPU and GPU causes optimization troubles that will only get worse as complexity increases.
Xbox One shines here, as CPU cache/register/eSRAM snooping allows compute jobs to be offloaded to and retrieved from specialized CUs seamlessly, alleviating CPU/GPU load.
The 1.3 TFLOP machine can address 6GB of textures through 32MB, on call!
This is much better than a 1.8 TFLOP machine rendering full texture loads!
50% more TFlops does not necessarily mean that the PS4 will have 50% more graphics ability.
PS4 is stronger raw, hands down. This won't make any Xbox One fans happy, but that's the truth of it; it's just not really important. Now let's look at the diminishing returns of raw GPU performance. Once PS4's raw compute limit is reached, games will cease to increase in detail, complexity and performance. 1.8 TFLOPs today is nothing great, especially for an off-the-shelf card. That's why games are buggy atm. That's why Crytek says CryEngine could already max out a console.
So why does a dev with a graphics engine that can already max out a next-gen console go exclusive with the supposedly less powerful console?
No, not because MS paid them, 'cause that would be FANBOY LOGIC.
And the answer is because PS4 can't render it at a stable FPS. Once PS4 hits its performance cap there is no workaround; you just can't do anything about it. Xbox One's performance cap is reduced by some factor (I DON'T EVEN FREAKING KNOW) by Tiled Resources. We don't even know by how much yet, as it's only now being used by big devs. Xbox One has a low-level API that runs seamlessly with the high-level DX11.2 and Tiled Resources. Game changer.
PS4 does not have access to it; they are using OpenGL to try to duplicate it, but it's nowhere close to what Xbox One does with it.
32MB = 6+GB of textures on call. Think about the implications of that number for a minute once this starts getting used widely and efficiently.
When MS said they would let the games do the talking, they meant it, because once more games launch using this resource there will be no denying the proof. Ryse is just the first wave.
Now what happens when a 1.8 TFLOP machine runs out of headroom to render, while a 1.3 TFLOP (raw) machine can convert GBs of data on the fly using only MBs of address space?
MS didn’t care about GDDR5, as its cost would have put too much loss on Xbox One sales, when Tiled Resources (referred to as TR from this point on) significantly changes the game.
host/guest GPUs: It is R280X-based, but with DDR3 memory it won't hit the TFLOP number. They just wanted the processing power of the GPU and the latency of DDR3. Used in conjunction with TR, it's crazy.
An 8GB texture render takes 100% GPU load on PS4, while TR on XB1 can do that with 48MB when devs utilize it. Ryse is just the beginning!
http://www.youtube.com/watch?feature=player_embedded&v=EswYdzsHKMc Watch this please. It’s pretty cool.
When devs start utilizing TR to its full potential, and that 1.8 TFLOP PS4 has reached its computational limit because of memory space and compute, the Xbox One can render exponentially more data with exponentially more fidelity, because it can take far more data and render it using only a fraction of the memory. 32MB --> rendering 6GB and more!
Microsoft created a hardware-accelerated version of an existing technique, and it offers a lot of improvements, because DirectX now takes care of a lot of the problems that programmers used to have to deal with in order to implement tiled resources themselves (mainly blending and filtering issues at tile seams).
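For example, evicting detail used to mean writing shader-side guard code by hand; with the hardware path you just unmap the tile. A sketch, with a hypothetical EvictTile helper (on Tier 2 hardware, reads from a NULL-mapped tile are defined to return zero, which is exactly the case software implementations had to patch over):

```cpp
#include <d3d11_2.h>

// Evict a tile: with hardware tiled resources you simply point the virtual
// tile at nothing. On Tier 2 hardware, sampling a NULL-mapped tile returns
// zero by definition, so the manual fallback/blend logic of a software
// implementation disappears.
void EvictTile(ID3D11DeviceContext2* ctx2, ID3D11Resource* tiledTex,
               UINT tileX, UINT tileY)
{
    D3D11_TILED_RESOURCE_COORDINATE coord = {};
    coord.X = tileX; coord.Y = tileY;
    D3D11_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;
    UINT rangeFlags = D3D11_TILE_RANGE_NULL;  // map to "no physical memory"
    UINT tileCount = 1;
    ctx2->UpdateTileMappings(tiledTex, 1, &coord, &region,
                             nullptr /*no pool needed*/, 1,
                             &rangeFlags, nullptr, &tileCount, 0);
}
```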
How many TFLOPs does it take to call on 32MB? So not only is the GPU load significantly lower, but it can also call on significantly more data using far fewer GPU resources.
http://www.youtube.com/watch?v=QB0VKmk5bmI&feature=player_detailpage Watch this please. Yet again: COOL!
So that is one of the many technologies inside that will separate Xbox One from PS4. Developers still do not know how powerful this tech is, because it can't be captured by raw TFLOP numbers. The hardware argument used around most of the net is completely moot, as it only considers the raw GPU specs and not the APU as a whole, and both systems are running custom APUs with key differences.
The important thing is that both will have the same type of technique, but one will have the more refined version.
But the part that is interesting is the cloud. If this could happen, then hardware would become less and less relevant. See this as something good.
Not because you want to glorify your console.
Comparing a multigenerational game like BF4 or COD: Ghosts is irrelevant, because it is most likely not using this technique, since those games also run on PS3 and Xbox 360, which don't support it. It's poorly optimized on all platforms, not just Xbox One. Exclusives on PS4 and Xbox One would rely more on the technique.
If you are going on about 1080p then you are easily IMPRESSED. I was expecting 4K.
But it isn’t important to me.
Anyway both consoles are good.
So why are you downplaying something like this, when it's good news?