[QUOTE="KingsMessenger"]
[QUOTE="i_am_interested"]
You know you didn't even answer my question, right? All I'm asking for is real-time framerate performance on an actual rendered scene in motion, and the resolution that scene was rendered at.
[/QUOTE]
20 million pixels per second, which works out to roughly 21 FPS at 720p, or about 50 ms of latency. By comparison, the first iterations of MLAA on the Cell were running at 120 ms of latency (8.3 FPS). Optimized code could easily get below 30 ms of latency, which would be good enough for 30 FPS. And that is on a Pentium 4. The algorithm is highly parallel, so it could theoretically do a lot more if scaled onto something like a Core i7.
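To spell out where those figures come from (back-of-the-envelope, assuming a full 1280x720 frame and taking the paper's 20 million pixels per second at face value):

1280 x 720 = 921,600 pixels per frame
20,000,000 / 921,600 ≈ 21.7 frames per second
1000 ms / 21.7 ≈ 46 ms per frame, i.e. roughly 50 ms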
It doesn't work like that. Your figures of 50 ms at 21 FPS and 30 ms at roughly 30 FPS assume the image is generated INSTANTLY on the GPU, which then instantly sends it to a P4 that can spend all of its 30 ms (out of ~33 total) on it, then send it back to the GPU instantly, which gets it ready to send out instantly for a 30 FPS game. That reads something like:
GPU generates image = 1 ms, GPU sends to CPU = 1 ms, CPU performs MLAA = 30 ms, CPU sends back to GPU = 1 ms, GPU finishes and sends it out = 1 ms
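Even granting those generous 1 ms figures for rendering and transfers (which is being very charitable to the P4 setup), that schedule adds up to 1 + 1 + 30 + 1 + 1 = 34 ms, while a 30 FPS frame budget is only 1000 / 30 ≈ 33.3 ms, so it already doesn't fit.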
My whole point is that people keep downplaying what Santa Monica was able to do by bringing up how MLAA was done on a P4, yet NO ONE can provide any evidence of it being performed on a real-time scene on a P4. REAL TIME.
That is because the entire paper was a goddamn tech demo... It was a proof of concept: completely UNOPTIMIZED and not at all indicative of what the final performance could be. As far as tech demos go, the sort of latency they were getting implies that it is expensive, but that it is ABSOLUTELY possible to achieve in real time.
Remember, the first implementations of MLAA on the Cell were FAR worse than that.
I am sorry if you can't wrap your head around how a paper like that is structured and what it implies, but the sort of performance they were getting was really impressive.