@Shewgenja said:
@primorandomguy said:
@Shewgenja: Can't take the heat, get out of the kitchen bitch. You're always dogging on MS and Xbox One, but now that you see the PS Pro is absolute shit and a half assed piece of crap, you have meltdowns. Scorpio is going to run circles around your precious PS4, OG or Pro, and you're going to cry and have a meltdown every step of the way.
Oh, nope. It was more of a fair warning. Now, I hope Scorpio is a proper Gen 9 console with a fresh software lineup and renewed interest from MS in making exclusives. That would be for the absolute best in my opinion.
However, if all Scorpio is going to be is a peen waving fest over multiplats, I will rain on the parade in ways that make 2013 look like a legendary dream realm. I don't get mad. I get even.
GPU shader power: Scorpio is roughly 1.03X the RX-480
Memory bandwidth: Scorpio is roughly 1.25X the RX-480
Vega's tile/polygon binning cache rendering will be very important for Nvidia GameWorks titles.
On the basic parameters, both shader ALU throughput and memory bandwidth, Vega 10 is slightly more than 2X the RX-480, i.e.
roughly double RX-480's 46.78 fps to ~92 fps, and
double RX-480's 49 fps to 98 fps.
This is with Fury X drivers, which have no support for Vega's tile/polygon binning cache rendering. Vega 10 is estimated to be faster than the GTX 1080.
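For anyone who wants the arithmetic spelled out, here is a minimal sketch of that scaling estimate (my own illustration, not a benchmark; the games behind the 46.78/49 fps baselines aren't named above, so the labels are placeholders, and exact doubling of 46.78 actually lands closer to 94 than 92):

```python
# Back-of-envelope sketch of the scaling claim above: if Vega 10 has roughly
# 2x the RX-480's shader ALU throughput and memory bandwidth, and frame rate
# scaled linearly with the more limiting of the two, you'd expect roughly
# double the fps. Real games never scale this cleanly.

def naive_scaled_fps(baseline_fps, alu_ratio, bw_ratio):
    """Scale a baseline fps by the smaller (more limiting) of the two ratios."""
    return baseline_fps * min(alu_ratio, bw_ratio)

# RX-480 baselines quoted above; titles aren't named, so labels are placeholders
baselines = {"title_1": 46.78, "title_2": 49.0}

for name, fps in baselines.items():
    est = naive_scaled_fps(fps, alu_ratio=2.0, bw_ratio=2.0)
    print(f"{name}: RX-480 {fps:.2f} fps -> Vega 10 (est.) ~{est:.0f} fps")
```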
http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser/2
ROPs & Rasterizers: Binning for the Win(ning)
We’ll suitably round out our overview of AMD’s Vega teaser with a look at the front and back-ends of the GPU architecture. While AMD has clearly put quite a bit of effort into the shader core, shader engines, and memory, they have not ignored the rasterizers at the front-end or the ROPs at the back-end. In fact, this could be one of the most important changes to the architecture from an efficiency standpoint.
Back in August, our pal David Kanter discovered one of the important ingredients of the secret sauce that is NVIDIA’s efficiency optimizations. As it turns out, NVIDIA has been doing tile-based rasterization and binning since Maxwell, and this was likely one of the big reasons Maxwell’s efficiency increased by so much. Though NVIDIA still refuses to comment on the matter, from what we can ascertain, breaking up a scene into tiles has allowed NVIDIA to keep a lot more traffic on-chip, which saves memory bandwidth and also cuts down on very expensive accesses to VRAM.
For Vega, AMD will be doing something similar. The architecture will add support for what AMD calls the Draw Stream Binning Rasterizer, which, true to its name, will give Vega the ability to bin polygons by tile. By doing so, AMD will cut down on the number of memory accesses by working with smaller tiles that can stay on-chip. This will also allow AMD to do a better job of culling hidden pixels, keeping them from making it to the pixel shaders and consuming resources there.
As we have almost no detail on how AMD or NVIDIA are doing tiling and binning, it’s impossible to say with any degree of certainty just how close their implementations are, so I’ll refrain from any speculation on which might be better. But I’m not going to be too surprised if in the future we find out both implementations are quite similar. The important thing to take away from this right now is that AMD is following a very similar path to where we think NVIDIA captured some of their greatest efficiency gains on Maxwell, and that in turn bodes well for Vega.
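To make the binning idea concrete, here is a minimal sketch of the general technique (purely illustrative, assuming a 32-pixel tile size; it is not AMD's or NVIDIA's actual implementation, which neither company has detailed):

```python
# Minimal sketch of tile-based binning: sort triangles into screen tiles
# first, then shade one tile at a time so that tile's working set fits in a
# small on-chip buffer instead of hitting VRAM repeatedly.

TILE = 32  # assumed tile size in pixels; real hardware tile sizes are not public

def bounding_box(tri):
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    return min(xs), min(ys), max(xs), max(ys)

def bin_triangles(triangles, width, height):
    """Map each triangle to every screen tile its bounding box overlaps."""
    bins = {}
    for tri in triangles:
        x0, y0, x1, y1 = bounding_box(tri)
        for ty in range(max(0, y0 // TILE), min(height - 1, y1) // TILE + 1):
            for tx in range(max(0, x0 // TILE), min(width - 1, x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

# Example: two triangles on a 128x128 render target
tris = [
    [(5, 5), (60, 10), (20, 50)],          # spans several tiles near the origin
    [(100, 100), (120, 110), (110, 125)],  # confined to one corner tile
]
for tile, tile_tris in sorted(bin_triangles(tris, 128, 128).items()):
    # In a real GPU each tile's color/depth would now be rasterized from an
    # on-chip buffer, and only the finished tile written out to VRAM once.
    print(f"tile {tile}: {len(tile_tris)} triangle(s) binned")
```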
Meanwhile, on the ROP side of matters, besides baking in the necessary support for the aforementioned binning technology, AMD is also making one other change to cut down on the amount of data that has to go off-chip to VRAM. AMD has significantly reworked how the ROPs (or as they like to call them, the Render Back-Ends) interact with their L2 cache. Starting with Vega, the ROPs are now clients of the L2 cache rather than the memory controller, allowing them to better and more directly use the relatively spacious L2 cache.
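As a rough way to picture why that matters, here is a toy sketch (my own assumptions about line size and access patterns, not AMD's design) of how routing ROP writes through a cache coalesces off-chip traffic:

```python
# Toy illustration of why making the ROPs clients of the L2 cache can cut
# off-chip traffic: repeated blends to the same framebuffer cache lines can
# be coalesced in L2 and written to VRAM once, instead of each ROP write
# going through the memory controller.

CACHE_LINE = 64  # bytes; assumed line size

def vram_writes_without_l2(writes):
    """Old path: every ROP write is a memory-controller transaction."""
    return len(writes)

def vram_writes_with_l2(writes):
    """New path: dirty lines accumulate in L2 and are flushed to VRAM once."""
    dirty_lines = {addr // CACHE_LINE for addr in writes}
    return len(dirty_lines)

# Example: heavy blending, 10 ROP writes to each of 16 nearby pixels (4 bytes apiece)
writes = [pixel * 4 for pixel in range(16) for _ in range(10)]
print("VRAM transactions, ROPs -> memory controller:", vram_writes_without_l2(writes))
print("VRAM transactions, ROPs -> L2 cache:         ", vram_writes_with_l2(writes))
```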