Quite frankly I expect it will have more to do with "If we blur the screen, it helps hide imperfections like jaggies and cheap texture filtering. They are looking at an up-scaled 720p image across the room anyway, they won't notice a little more blur". Then they stick that same game on PC... with us looking at those exaggerated blur effects right in our face at 1080p. If they are going for a more film/CGI look, I want to know what film has a can machine glowing so brightly you think it's about to explode.
AnnoyedDragon
There's been decades of research into simulating lens-based phenomena and it's not because people just want to cover up a lack of texture filtering. I'm not sure why you think everyone is trying to pull the wool over your eyes here.
I was speaking in terms of the way us gamers think, not the way a developer thinks. You have to remember that this is a gaming forum, so I'm going to be talking in terms that us gamers understand.
You're the one talking about game development implementation details... you brought it all up and have been saying things as if you have knowledge about how they work. You can think whatever you want about how graphics work, but if you came here and said that magic elves make pixels in a tree factory, it doesn't make you any less wrong just because it's how you think as a gamer.
Textures paint everything on screen, so a poor quality texture is like a poor foundation that everything else is built on top of. A fuzzy looking texture, like say the signs in Crysis 2, isn't going to look less fuzzy by applying more shader effects to it. You have to replace the texture outright to improve its appearance. And if you can use shaders to improve its appearance like you say, surely that is going to be disproportionately more performance intensive than just using a better texture?
Albedo textures just change the base color of a surface. They're good for a sign, because a sign is completely flat and has a uniform response to lighting. The only difference in real life would be the colors of the paint used (assuming each paint has the same glossiness), and so a simple way to represent that variation in surface color is with an albedo texture. You can also represent text and other decals with a distance field and do it with less memory, or you could do it with vertices. What would be best would depend on the hardware, the contents of the sign, and the artist workflow + time budget. Either way...most things in a game world are not signs. Consequently they need much much more than a high-res albedo map to make them look realistic in different environments and lighting conditions.
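To make that concrete, here's a toy sketch (my own illustration, not any engine's actual code) of how an albedo sample is only one input into shading a pixel. A flat sign facing the light basically just shows you the albedo, which is why a sign can get away with a plain colour texture while most surfaces can't:

```python
# Toy Lambert shading sketch. The albedo sample is just one input; the final
# pixel colour also depends on the surface normal and the light, which is why
# a flat sign survives on albedo alone but most materials need more.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_lambert(albedo, normal, light_dir, light_color):
    """albedo, light_color: RGB tuples in 0..1; normal, light_dir: unit vectors."""
    n_dot_l = max(dot(normal, light_dir), 0.0)   # cosine falloff
    return tuple(a * l * n_dot_l for a, l in zip(albedo, light_color))

# A flat sign facing the light: the result is essentially the albedo texture itself.
print(shade_lambert((0.8, 0.1, 0.1), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)))
```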
When you have 512MB-1GB of GDDR5 memory, I don't see the point in leaving that resource underutilized and placing the workload on other areas like shaders instead.
Why not just use a better texture, negating the need for extra effects to make the texture look less bland?
Nobody is going to make the argument for "underutilizing" whatever memory you have available. My point is that albedo textures alone are not even close to being enough to give you a realistic approximation of what a surface looks like. And if you don't understand why shaders get hit so hard in modern games, then you probably haven't paid attention to the past 5-10 years of GPU hardware development. Shader ALU has been growing at a tremendously faster rate than memory or bandwidth.
When I see the incredible facial texture work in Crysis 1 back in 2007, and then a poor quality texture with an overuse of normal mapping to make up for the lack of fine detail being used in 2011 with Crysis 2 when consoles got involved, it makes me question whether the low memory environment on consoles pushed them to seek non-texture-heavy alternatives.
Normal maps are textures, so if you're pointing them out as an example of "non-texture-heavy alternatives" then you're not making any sense. Artists tend to prefer normal maps over albedo maps because they give you actual surface variation that the lighting responds to. I'm sure their texture budget was lower for a game targeting hardware with less memory, but having normal maps is not an example of that.
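For anyone following along, here's a rough toy illustration (my own, using the common 0-255 to -1..1 encoding convention, not anything taken from CryEngine) of why a normal map is "a texture the lighting responds to": each texel decodes to a direction that perturbs the surface normal before the diffuse term is computed:

```python
# A normal-map texel isn't a colour, it's a direction. Perturbing the normal
# before the lighting maths runs is what makes the stored detail react to light.

import math

def decode_normal(rgb):
    """Map an 8-bit normal-map texel to a unit vector."""
    n = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def diffuse(n, light_dir):
    return max(sum(a * b for a, b in zip(n, light_dir)), 0.0)

flat_texel   = (128, 128, 255)   # "straight up" -> no surface detail
bumped_texel = (180, 128, 230)   # leaning sideways -> responds differently to the light

light = [0.6, 0.0, 0.8]
print(diffuse(decode_normal(flat_texel), light))
print(diffuse(decode_normal(bumped_texel), light))   # changes again if the light moves
```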
I'm not quite sure where you got pre-baking from. My argument has been keeping the facial detail in a high resolution texture > a low resolution texture with mapping to add fine detail.
Well that's the thing...if your proposed approach is to only have a high-resolution albedo map then you're implicitly supporting "pre-baking" because that's the only way to get the proper lighting response since you don't have the requisite details stored in geometry or in a normal map.
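To spell out what "pre-baking" means in this context (a toy illustration of the general idea, not Crytek's actual pipeline): if crease and shadow detail is painted straight into a high-resolution albedo map, the sampled colour never changes with the light, whereas detail stored as normals keeps responding to it:

```python
# Baked detail vs. dynamic detail: the baked texel ignores the light entirely,
# because the "lighting" was decided when the texture was authored.

def baked_sample(albedo_with_lighting_painted_in, light_dir):
    # light_dir is ignored on purpose
    return albedo_with_lighting_painted_in

def dynamic_sample(albedo, normal, light_dir):
    n_dot_l = max(sum(a * b for a, b in zip(normal, light_dir)), 0.0)
    return tuple(c * n_dot_l for c in albedo)

print(baked_sample((0.35, 0.30, 0.28), (0.0, 0.0, 1.0)))   # same result for any light
print(dynamic_sample((0.70, 0.60, 0.55), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```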
Too many depth-adding effects like normal mapping infamously lead to unrealistically plastic-looking characters and environments, something that Unreal Engine 3 got a lot of criticism for back in the day.
Normal mapping has nothing to do with something looking "plastic". The plasticky look is a result of the simplified lighting models used in most games. Those Crysis faces you keep talking about had positively huge normal maps on them. The reason they looked good is because they used high-quality skin shaders with sub-surface scattering.
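As a very rough illustration of that point about lighting models (my own simplification, not the actual Crysis skin shader): plain Lambert diffuse cuts off hard at the terminator, which is part of what reads as "plastic" on skin, while a cheap subsurface-scattering stand-in like wrap lighting softens that falloff:

```python
# Hard Lambert cutoff vs. "wrap lighting", a common cheap approximation of
# light scattering through a surface before it exits.

def lambert(n_dot_l):
    return max(n_dot_l, 0.0)

def wrap_diffuse(n_dot_l, wrap=0.5):
    # Let light "wrap" past the terminator instead of going black instantly.
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

for n_dot_l in (1.0, 0.5, 0.0, -0.3):
    print(n_dot_l, lambert(n_dot_l), round(wrap_diffuse(n_dot_l), 3))
# At and past the terminator (n_dot_l <= 0) Lambert is black, while the wrapped
# version fades out gradually -- real skin shaders build on ideas like this.
```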