[QUOTE="GummiRaccoon"]
[QUOTE="04dcarraher"] "Direct to metal" does not mean anything for a GPU's processing abilities beyond its design limits. Consoles still use APIs just like PCs do (i.e. DirectX and OpenGL). And the overhead on PCs running modern OSes like Windows Vista/7/8 is not as bloated as you claim. Fact is, an old X1950 can run well-coded games at the same or better quality than the 360. Coding for a standardized piece of hardware has its benefits, since you can get the hardware to use nearly all of its abilities and resources, but there are still limits. Once you hit the wall where optimization gives no more wiggle room to do the same job with fewer resources, it's nothing but compromises.
This console generation has shown us the shift from SD to HD, yet now, near the end of its life cycle, many games have gone back below HD standards and barely hold the 30 fps standard in order to push certain graphics or keep a set frame rate. The next set of consoles will not be that powerful, and with them using only low- to mid-range GPUs, even "direct to metal" coding will not bypass the fact that those GPUs can't do full-blown tessellation or heavier shader workloads compared to GPUs that are multiple times faster. These upcoming consoles will start out at 1080p but will trickle back down to 720p over time in order to produce certain graphical effects or hold a set frame rate.
Kinthalis
Explain, then, how Carmack is wrong:
http://www.pcper.com/reviews/Editorial/John-Carmack-Interview-GPU-Race-Intel-Graphics-Ray-Tracing-Voxels-and-more/Intervi
"I don't worry about the GPU hardware at all. I worry about the drivers a lot because there is a huge difference between what the hardware can do and what we can actually get out of it if we have to control it at a fine grain level. That's really been driven home by this past project by working at a very low level of the hardware on consoles and comparing that to these PCs that are true orders of magnitude more powerful than the PS3 or something, but struggle in many cases to keep up the same minimum latency. They have tons of bandwidth, they can render at many more multi-samples, multiple megapixels per screen, but to be able to go through the cycle and get feedback... fence here, update this here, and draw them there... it struggles to get that done in 16ms, and that is frustrating."
http://www.gamespy.com/articles/641/641662p2.html
You've already been schooled. But let me keep the ball rolling.
Carmack is mainly talking about dated APIs on the PC, i.e. DX9, currently the most used API.
DX11 and newer versions of OpenGL (combined with much better driver-side support) do a MUCH better job of optimizing many of the rendering pathways, from running common lighting/shadowing code a LOT faster to introducing multi-threaded rendering.
With DX11 and DX11.1, the gap between being able to optimize at a very low level and having to rely on the PC APIs is MUCH, MUCH smaller. We're talking a difference of 10 to 25%, and that's against the incredible optimization we'll see 4 years after the consoles hit the market.
I was talking to him. I got all my direct-to-hardware info from Carmack, so if Carmack is incorrect, I'd like to know.