@gamecubepad:
1. Supersampling is essentially the same exact thing as running 4K in DSR or VSR mode on a 1080p screen. Your "looks FAR better than 4k" claim is bullshit. It's the same thing. I showed you with Ryse.
OMG, I don't know how many times I have to explain this. SSAA is an anti-aliasing method. Anti-aliasing works to decrease jaggedness on edges; it does nothing to increase the display resolution itself. Crank up AA - regardless of whether it's MSAA, FXAA, or SSAA - but keep the resolution at 480p, and the picture will still be incredibly pixelated; HOWEVER, there will be no jagged edges and the game will retain its graphical fidelity.
Unlike most AA settings, which only produce a crisper image of the EDGES of 3D models, (some) SS settings produce a crisper, more detailed image of the edges AND the models themselves.
Display resolution IS NOT anti-aliasing. It doesn't matter if you run your game at 4K; the fact of the matter is that if you turn all your AA settings off, you will have jagged 3D models with the staircase effect.
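To make this concrete, here's a minimal sketch of what SSAA actually does: render the scene at k times the output resolution, then average each k x k block of samples down to one output pixel. This is my own Python/NumPy illustration, not any engine's actual code, and the high-res buffer stands in for a real rasterizer's output.

    import numpy as np

    def ssaa_downsample(hi_res, k):
        # hi_res: (H*k, W*k, 3) float array = the scene rendered at k x scale.
        # Each output pixel is the average of a k x k block of samples, which
        # smooths the edges AND the shading detail inside the models.
        hk, wk, c = hi_res.shape
        h, w = hk // k, wk // k
        return hi_res.reshape(h, k, w, k, c).mean(axis=(1, 3))

Note that the output is still only H x W pixels: SSAA cleans up aliasing inside those pixels, but it can't add display resolution. A 480p output stays 480p.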
HOWEVER, AA is becoming less and less needed as 3D modeling improves. Artists are progressively adding more and more vertices to their models, making traditional AA like MSAA less necessary. This is why BF4 only has MSAA 4x - it has high-quality, smooth 3D models - while games like CS:GO need MSAA 8x, because their models don't have quite as many vertices.
Which brings me to the point about RYSE that I probably didn't clarify earlier: if a game already has a lot of vertices, then it needs to rely on ANY AA setting less. This means increasing the SS past a certain extent won't give you a much better picture.
Point is, display resolution isn't an AA setting, and it's incapable of improving the quality and smoothness of the 3D models THEMSELVES.
Regardless of all of this, SSAA is only ONE WAY of improving graphical fidelity, and though to its credit the XB1X will be introducing it to consoles for the first time, it still lags behind in rendering speed and texture detail.
2. Digital Foundry with confirmation from MS. Article 1. Article 2. Showed you twice where supersampling for 1080p users is a system-level feature of the X1X. lol.
I KNOW! I CONCEDED THAT EARLIER! But obviously you can't read, so I'll repost my reply.
2. OK, that's actually fantastic, but what level will the supersampling be? 120% (or x1.44)? 150% (x2.25)? 200% (x4)? 283% (x8)? We need more details than "it will simply have supersampling," because I can tell you that if it's anything lower than x2.25 (or 150% res scale), then it won't make much of a difference. Plus, if it's incapable of playing games at 60 fps with this res scale, is it really worth it? Time will tell.
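For reference, those percentages are linear scale factors, so the pixel cost grows with the square. Quick math on a 1080p base (my own arithmetic, just to show the jumps):

    # Pixel cost of common res-scale settings on a 1920x1080 base.
    base_w, base_h = 1920, 1080
    for scale in (1.2, 1.5, 2.0, 2.83):
        w, h = int(base_w * scale), int(base_h * scale)
        print(f"{scale*100:.0f}% scale -> {w}x{h}, ~{scale**2:.2f}x the pixels to render")

So 150% already means rendering 2.25x the pixels, which is why the fps question matters.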
I then moved on to state how res scaling is only one of MANY ways to improve graphical fidelity. But again, you completely ignored that.
Already showed you where PS4 games used 3GB of 4.5GB available for VRAM. Sony increased this to 5.5GB for games, so VRAM usage will jump to ~4GB. X1X adds 3.5GB RAM for games, putting it into the 6-7GB VRAM range while still easily accommodating 1.5-3GB System RAM usage.
I see what you mean. It's the term "VRAM" - which is typically used for dedicated graphics cards, not shared RAM like in consoles - that threw me off.
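To spell out the arithmetic behind the figures quoted above (a rough sketch using the quoted numbers, which are your claims, not official spec sheets):

    # Shared-pool memory budgets as quoted above. Consoles use one unified
    # pool, so "VRAM" here just means the graphics share of that pool.
    ps4_pool_old = 4.5               # GB for games originally; ~3 GB used for graphics
    ps4_pool_new = 5.5               # GB after Sony's update -> graphics use ~4 GB
    x1x_pool = ps4_pool_new + 3.5    # = 9 GB for games on the X1X
    for sys_ram in (2.0, 3.0):       # GB assumed left for game logic / system data
        print(f"{sys_ram} GB system use -> {x1x_pool - sys_ram} GB for graphics")
    # -> 7.0 and 6.0 GB, i.e. the claimed 6-7 GB "VRAM" range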
Still, that doesn't mean it will be better than cards with less memory. You also have to take into account things like processing speed, which matter more. As I've repeatedly said, VRAM is not as important as architecture and speed; multiple cards outperform cards with more VRAM. The GTX 1060 6GB vs. the RX 580, the GTX 1060 3GB vs. the GTX 1050 Ti, the GTX 1080 Ti vs. the Vega Frontier Edition: all of the former cards have less VRAM, ranging from a 1 GB difference to a 5 GB difference, yet all of them trump the latter, due to superior quality. And seeing as consoles use AMD - which is notorious for weak architecture - that 6-7 GB of VRAM will more likely perform like 3-5 GB of an Nvidia equivalent. It also doesn't help that it has an effective memory speed of 6.8 GHz, which is lower than even the GTX 1060 3GB variant's (9 GHz).
Its large amount of GDDR RAM won't be able to compensate for inferior AMD architecture or inferior clock speeds, the latter of which is FAR more important when it comes to rendering graphics. It's why the GTX 1080 Ti, which has 5 GB less VRAM, trumps the Vega Frontier, or why the GTX 1060, with 2 GB less VRAM, trumps the RX 580. If you have all this room to store data but you're not processing it quickly enough to provide smooth gameplay, then the amount of room you have is irrelevant. Then again, it seems console players are fine with 30 fps with frequent drops into the 20s.
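One thing worth adding for anyone comparing memory speeds: the GHz figures quoted are per-pin data rates, and total memory bandwidth also depends on bus width. A rough sketch (the formula is standard; the bus widths are my assumptions for these specific parts, so verify them before leaning on the numbers):

    # Bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8.
    def bandwidth_gbs(rate_gbps, bus_bits):
        return rate_gbps * bus_bits / 8

    print(bandwidth_gbs(6.8, 384))   # X1X GDDR5, assuming a 384-bit bus: ~326 GB/s
    print(bandwidth_gbs(9.0, 192))   # GTX 1060 at the quoted rate, assuming 192-bit: ~216 GB/s

So the per-pin rate on its own doesn't settle the memory-speed question; it's the architecture argument doing the heavy lifting here.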
It's the same thing with CPUs. Console lemmings love raving about how many cores they have. "Ooooh, look, I have 8 cores." Yet their 8 cores are being outdone by a 4-core i3. Why? Because Intel's superior architecture and faster speeds >>>>> AMD's shitty architecture. Why do you think they have so many cores? It's to compensate.
THAT is the reason why I've been saying that it will probably be at GTX 1060 3GB level. Yeah, it has twice as much VRAM, but it has inferior architecture and its memory is 2.2 GHz slower.