Found some info that may or may not help explain the difference.
http://semiaccurate.com/forums/showthread.php?t=7231
It is definitely going to be an interesting development.
It's a March 2012 article talking about how AMD was putting PRT into their cards.
Quote:"but will that be exposed as a new API or as a new texture fetch command for GLSL? I would bet on the API solution, making this fixed per texture is probably easier to implement. But if the pipeline stalls anyway, using the MMU for large data could also be interesting for other buffers beside textures
NVidia has also announced MMUs in there next GPU generation Kepler but has given even fewer details about that.
I hope that the long wait for the OpenGL extension is a good sign and AMD is already talking to NVidia to create a unified extension, the Kepler GPUs are just around the corner"
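For what it's worth, the 'API solution' he bets on is basically what we got: AMD shipped AMD_sparse_texture and there is now the vendor-neutral GL_ARB_sparse_texture. Here's a rough sketch of what that API looks like (my own illustration, not code from the article; error handling and page-size alignment queries omitted):

```cpp
#include <GL/glew.h>  // assumes GLEW (or a similar loader) and a GL 4.x context

// My sketch of the ARB_sparse_texture path: reserve a huge virtual
// texture, then commit physical memory for only the tiles you need.
// A real app must size commit regions in multiples of the page size
// (query GL_VIRTUAL_PAGE_SIZE_X_ARB / _Y_ARB); skipped here.
GLuint createSparseTexture() {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Mark the texture as sparse BEFORE allocating storage.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);

    // 16K x 16K virtual allocation -- no physical memory backs it yet.
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 16384, 16384);

    // Commit physical pages for just a 512x512 region at the origin.
    glTexPageCommitmentARB(GL_TEXTURE_2D, /*level*/ 0,
                           /*x*/ 0, /*y*/ 0, /*z*/ 0,
                           /*w*/ 512, /*h*/ 512, /*d*/ 1,
                           GL_TRUE /*commit*/);
    return tex;
}
```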
AMD has definitely been working on this idea of PRT for a few years now, and from what people have been saying, development of the XB1 only goes back about 2 or so years.
In this video https://www.youtube.com/watch?v=EswYdzsHKMc (only 3 mins) he clearly mentions that it is for Intel, AMD & NVidia, and he specifically says next gen console, singular (not next gen consoles).
What we need to remember is that he is talking about PRT at a hardware level now, not just at a software level (which can be done on any hardware, just far less efficiently).
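To see why the software version is far less efficient, here's roughly what a shader-based implementation has to do on every single texture fetch, written out as plain C++ (hypothetical names, my sketch, not from any of the videos). With hardware PRT the GPU's page tables do this translation for free:

```cpp
#include <cstdint>

// Hypothetical software virtual-texturing lookup, shown as plain C++
// for illustration. In a real engine this runs per-fetch in the pixel
// shader, which is where the extra shader instructions come from.
struct PageTableEntry {
    uint16_t physX, physY;  // tile position inside the physical cache texture
};

struct VirtualTexture {
    const PageTableEntry* pageTable; // one entry per virtual tile
    int tilesWide;                   // virtual size, measured in tiles
    int tileSize;                    // e.g. 128x128 texels per tile
    int cacheSizeTexels;             // physical cache texture dimensions
};

// Translate a virtual UV into a physical UV inside the tile cache.
void translateUV(const VirtualTexture& vt, float u, float v,
                 float& physU, float& physV) {
    int tileX = (int)(u * vt.tilesWide);
    int tileY = (int)(v * vt.tilesWide);
    PageTableEntry e = vt.pageTable[tileY * vt.tilesWide + tileX];

    // Offset within the tile (hardware PRT skips all of this).
    float fracU = u * vt.tilesWide - tileX;
    float fracV = v * vt.tilesWide - tileY;

    physU = (e.physX + fracU) * vt.tileSize / vt.cacheSizeTexels;
    physV = (e.physY + fracV) * vt.tileSize / vt.cacheSizeTexels;
}
```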
Then there is this video http://channel9.msdn.com/Events/Build/2013/4-063
This one goes for a lot longer, but PLEASE, before ANYONE posts in this thread, spend the time and watch it, or please don't post. It answers so many of the questions people keep arguing about in these threads.
Also, in this video he mentions next gen console (not consoles).
Then there is a company called Graphine, which makes middleware 'for game developers'. He is in the longer video as well, and said that converting games to their system (PRT plus the old shader path) is relatively painless (so no massive development time to take advantage of hardware PRT, and if there is no hardware PRT it uses the normal rendering methods): http://graphinesoftware.com/Announcing-Granite-2.0
From their website, posted just above:
"Hardware tiled resources offer several improvements over the shader based method available in Granite 1.8. Especially when using high-quality filtering modes such as full anisotropic trilinear filtering with a high degree of anisotropy (>= 8x) there are clear advantages. Firstly, the shader can be simplified, reducing the instruction count from around 30 to 10-15 shader instructions in the case of anisotropic filtering. Secondly, since no overlap needs to be used on tile borders cache use and compression can be improved. Finally, streaming throughput is improved by 33% as no mipmaps have to be generated for uploaded tiles"
It also goes on to say:
"Of course Granite 2.0 still has full support for shader emulation on older API´s and hardware. This makes using tiled resources in multi-platform games or engines very easy. If there is hardware Granite will use it, if not it will automatically fall back to a shader based implementation."
So I think MS has a hardware implementation of PRT (as evidenced by the reviews of the 7790), and even though Sony may be able to do it, theirs would be software based and wouldn't get the improvements noted above.
*Also, at 29 minutes and 45 seconds into the longer video, the guy from the middleware company (Graphine Software) talks about what is needed to really work well with Tiled Resources (a toy sketch of the cache/latency trade-off follows the list):
*Minimize Latency
*Minimize Texture Cache Size
*Minimize Storage Size
*Minimize Production Overhead
*Maximize Unique Texture Data
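Those points are really all about the same trade-off: the smaller you keep the resident tile cache, the more often tiles have to stream in, which is exactly why minimizing latency tops his list. A toy LRU tile cache (entirely hypothetical, just to make the trade-off concrete):

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>

// Hypothetical LRU residency cache for texture tiles: shrinking its
// capacity ("minimize texture cache size") raises the miss rate, and
// every miss is a stream-in whose cost is dominated by latency.
class TileCache {
    size_t capacity_;                          // max resident tiles
    std::list<uint64_t> lru_;                  // most recently used at front
    std::unordered_map<uint64_t,
        std::list<uint64_t>::iterator> map_;   // tileID -> position in lru_
public:
    explicit TileCache(size_t capacity) : capacity_(capacity) {}

    // Returns true on a hit; on a miss the caller must stream the tile in
    // (the least recently used tile has already been evicted to make room).
    bool touch(uint64_t tileID) {
        auto it = map_.find(tileID);
        if (it != map_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);  // move to front
            return true;
        }
        if (lru_.size() == capacity_) {  // evict the coldest tile
            map_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(tileID);
        map_[tileID] = lru_.begin();
        return false;
    }
};
```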
So this is my take on it:
Sony has chosen the method 'currently' employed in games: standard rendering with standard textures that take up a lot of space (hence the need for high bandwidth). I believe MS has gone for the forward-thinking approach that AMD has wanted for years with its hardware PRT. With hardware PRT the resident texture data is SIGNIFICANTLY smaller (ie only needing around 16mb for a 1920x1080 resolution), so the limiting factor is no longer bandwidth but latency, which would explain the use of DDR3 RAM, the move engines and the low-latency eSRAM.
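That 16mb figure checks out on the back of an envelope (my assumptions, not numbers from the videos): a 1080p frame can only ever show about two million texels at full detail, so the resident set only needs to be a small multiple of that:

```cpp
#include <cstdio>

// Back-of-envelope check on the "~16mb for 1920x1080" claim.
// Assumptions (mine): 4 bytes per texel, a 1/3 overhead for resident
// mip levels, and 1.5x slack for partially visible tiles.
int main() {
    const double texels    = 1920.0 * 1080.0;   // ~2.07M visible texels
    const double bytes     = texels * 4.0;      // ~7.9 MB at 1:1 detail
    const double withMips  = bytes * 4.0 / 3.0; // ~10.6 MB with mip chain
    const double withSlack = withMips * 1.5;    // ~15.8 MB working set
    std::printf("resident set: %.1f MB\n", withSlack / (1024.0 * 1024.0));
    return 0;
}
```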
The key will definitely be whether developers take advantage of it, but given that AMD has been building this as an open standard for over 2 years, getting Intel & NVIDIA on board (as they are all moving towards on-die cache), I think developers will be (if they aren't already) right on board with it.
http://www.neogaf.com/forum/showthread.php?t=458866
Have a look at the slide directly from AMD (besides John Carmack saying he will implement it in Doom 4), and note:
*Texture Sizes up to 32GB
*Expected to feature in next gen game engines (this is a biggy, nearly all game designers are spending more time with AMD now)
*Notice the bottom bit of the Slide (AMD Radeon HD 7900 Series) doesn't say 7000 series!!!