What is so special graphically about Crysis 2 (on consoles) and Rage?


#451 KingsMessenger
Member since 2009 • 2574 Posts

[QUOTE="KingsMessenger"]

[QUOTE="i_am_interested"]

You know you didn't even answer my question, right? All I'm asking for is real-time framerate performance on an actual rendered scene in motion, and the resolution that scene was rendered at.

i_am_interested

20 million pixels per second, which ends up being about 21 FPS at 720p. That is roughly 50ms of latency. By comparison, the first iterations of MLAA on the Cell were running at 120ms latency (8.3 FPS). Optimized code could easily reach less than 30ms latency, which would be good enough for 30 FPS. And that is on a Pentium 4. It is a highly parallel approach which could theoretically do a lot more if they were to scale it onto something like a Core i7.

It doesn't work like that. Your numbers of 50ms at 21 FPS and 30ms at about 30 FPS assume that the image is generated INSTANTLY on the GPU, which then instantly sends it to a P4 that can spend all of its 30ms (out of 33 total) on it and then send it back to the GPU instantly, which can get it ready to send out instantly for a 30 FPS game. That reads something like:

GPU generates image = 1ms, GPU sends to CPU = 1ms, CPU performs MLAA = 30ms, CPU sends to GPU = 1ms, GPU finishes and sends out = 1ms

My whole point is that people keep downplaying what Santa Monica was able to do by bringing up how MLAA was done on a P4, yet NO ONE can provide any evidence of it being performed on a real-time scene on a P4. REAL TIME.

That is because the entire paper was a god damn tech demo... It was a proof of concept, completely UNOPTIMIZED and not at all indicative of what the final performance could be. As far as tech demos go, the sort of latency they were getting implies that it is expensive, but that it is ABSOLUTELY possible to achieve in real time.

Remember, the first implementations of MLAA on the Cell were FAR worse than that.

I am sorry if you can't wrap your head around how a paper like that is structured and what it implies, but the sort of performance they were getting was really impressive.
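
For what it's worth, the arithmetic behind those figures is easy to check. A minimal sketch, assuming a 720p target, taking the quoted 20 Mpix/s number at face value and ignoring GPU-CPU transfer time:

[code]
// Quick sanity check of the throughput numbers quoted above.
// Assumptions (mine): 720p target, the 20 Mpix/s figure taken at face value,
// GPU<->CPU transfer time ignored.
#include <cstdio>

int main() {
    const double pixelsPerFrame = 1280.0 * 720.0;   // 921,600 pixels at 720p
    const double throughput     = 20.0e6;           // quoted 20 million pixels/second
    const double mlaaMsPerFrame = pixelsPerFrame / throughput * 1000.0;  // ~46 ms
    const double fpsIfMlaaAlone = throughput / pixelsPerFrame;           // ~21.7 FPS

    const double budget30fps = 1000.0 / 30.0;        // ~33.3 ms total per frame
    // The objection raised above: rendering, the two transfers and the MLAA pass
    // all have to fit inside that 33.3 ms together, not just the MLAA pass alone.
    std::printf("MLAA pass alone: %.1f ms (%.1f FPS)\n", mlaaMsPerFrame, fpsIfMlaaAlone);
    std::printf("Total 30 FPS budget: %.1f ms\n", budget30fps);
    return 0;
}
[/code]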

#452 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="KingsMessenger"]

[QUOTE="i_am_interested"]

You know you didn't even answer my question, right? All I'm asking for is real-time framerate performance on an actual rendered scene in motion, and the resolution that scene was rendered at.

i_am_interested

20 million pixels per second, which ends up being about 21 FPS at 720p. That is roughly 50ms of latency. By comparison, the first iterations of MLAA on the Cell were running at 120ms latency (8.3 FPS). Optimized code could easily reach less than 30ms latency, which would be good enough for 30 FPS. And that is on a Pentium 4. It is a highly parallel approach which could theoretically do a lot more if they were to scale it onto something like a Core i7.

It doesn't work like that. Your numbers of 50ms at 21 FPS and 30ms at about 30 FPS assume that the image is generated INSTANTLY on the GPU, which then instantly sends it to a P4 that can spend all of its 30ms (out of 33 total) on it and then send it back to the GPU instantly, which can get it ready to send out instantly for a 30 FPS game. That reads something like:

GPU generates image = 1ms, GPU sends to CPU = 1ms, CPU performs MLAA = 30ms, CPU sends to GPU = 1ms, GPU finishes and sends out = 1ms

My whole point is that people keep downplaying what Santa Monica was able to do by bringing up how MLAA was done on a P4, yet NO ONE can provide any evidence of it being performed on a real-time scene on a P4. REAL TIME.

In reference to http://www.eurogamer.net/articles/digitalfoundry-saboteur-aa-blog-entry, and I quote:

"In the meantime, what we have is something that's new and genuinely exciting from a technical standpoint. We're seeing PS3 attacking a visual problem using a method that not even the most high-end GPUs are using."

Eurogamer didn't factor in AMD's http://developer.amd.com/gpu_assets/AA-HPG09.pdf

It was later corrected by Christer Ericson, director of tools and technology at Sony Santa Monica, and I quote:

"The screenshots may not be showing MLAA, and it's almost certainly not a technique as experimental as we thought it was, but it's certainly the case that this is the most impressive form of this type of anti-aliasing we've seen to date in a console game. Certainly, as we alluded to originally, the concept of using an edge-filter/blur combination isn't new, and continues to be refined. This document by Isshiki and Kunieda published in 1999 suggested a similar technique, and, more recently, AMD's Iourcha, Yang and Pomianowski suggested a more advanced version of the same basic idea".

The AMD paper by Iourcha, Yang and Pomianowski referenced there is http://developer.amd.com/gpu_assets/AA-HPG09.pdf

To quote AMD's paper "This filter is the basis for the Edge-Detect Custom Filter AA driver feature on ATI Radeon HD GPUs".

Eurogamer's "not even the most high-end GPUs are using" assertion would be wrong. From top to bottom, current ATI GPUs support Direct3D 10.1 and the methods mentioned in AMD's AA paper.
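
To illustrate the general family of techniques those papers describe, here is a rough sketch of a generic edge-detect-then-blend post-filter. This is my own simplified illustration, not AMD's actual Edge-Detect CFAA driver code or the Isshiki/Kunieda method; the threshold and weights are arbitrary:

[code]
// Generic "detect edges, then filter across them" post-process, as a rough
// illustration of the family of techniques the papers above describe.
// Not AMD's driver implementation; threshold and weights are arbitrary.
#include <vector>
#include <cstdint>
#include <cstdlib>

std::vector<uint8_t> edgeFilterAA(const std::vector<uint8_t>& lum, int w, int h,
                                  int threshold = 24) {
    std::vector<uint8_t> out(lum);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            const int c  = lum[y * w + x];
            const int dx = std::abs(c - lum[y * w + x + 1]);     // horizontal step
            const int dy = std::abs(c - lum[(y + 1) * w + x]);   // vertical step
            if (dx > threshold || dy > threshold) {
                // On a detected edge, blend with the 4-neighbourhood instead of
                // leaving the hard stair-step.
                const int sum = c + lum[y * w + x - 1] + lum[y * w + x + 1]
                                  + lum[(y - 1) * w + x] + lum[(y + 1) * w + x];
                out[y * w + x] = static_cast<uint8_t>(sum / 5);
            }
        }
    }
    return out;
}
[/code]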

#453 KingsMessenger
Member since 2009 • 2574 Posts

[QUOTE="i_am_interested"]

[QUOTE="KingsMessenger"]

20 million pixels per second, which ends up being about 21 FPS at 720p. That is roughly 50ms of latency. By comparison, the first iterations of MLAA on the Cell were running at 120ms latency (8.3 FPS). Optimized code could easily reach less than 30ms latency, which would be good enough for 30 FPS. And that is on a Pentium 4. It is a highly parallel approach which could theoretically do a lot more if they were to scale it onto something like a Core i7.

ronvalencia

It doesn't work like that. Your numbers of 50ms at 21 FPS and 30ms at about 30 FPS assume that the image is generated INSTANTLY on the GPU, which then instantly sends it to a P4 that can spend all of its 30ms (out of 33 total) on it and then send it back to the GPU instantly, which can get it ready to send out instantly for a 30 FPS game. That reads something like:

GPU generates image = 1ms, GPU sends to CPU = 1ms, CPU performs MLAA = 30ms, CPU sends to GPU = 1ms, GPU finishes and sends out = 1ms

My whole point is that people keep downplaying what Santa Monica was able to do by bringing up how MLAA was done on a P4, yet NO ONE can provide any evidence of it being performed on a real-time scene on a P4. REAL TIME.

In reference to http://www.eurogamer.net/articles/digitalfoundry-saboteur-aa-blog-entry, and I quote:

"In the meantime, what we have is something that's new and genuinely exciting from a technical standpoint. We're seeing PS3 attacking a visual problem using a method that not even the most high-end GPUs are using."

Eurogamer didn't factor in AMD's http://developer.amd.com/gpu_assets/AA-HPG09.pdf

It was later corrected by Christer Ericson, director of tools and technology at Sony Santa Monica, and I quote:

"The screenshots may not be showing MLAA, and it's almost certainly not a technique as experimental as we thought it was, but it's certainly the case that this is the most impressive form of this type of anti-aliasing we've seen to date in a console game. Certainly, as we alluded to originally, the concept of using an edge-filter/blur combination isn't new, and continues to be refined. This document by Isshiki and Kunieda published in 1999 suggested a similar technique, and, more recently, AMD's Iourcha, Yang and Pomianowski suggested a more advanced version of the same basic idea".

The AMD paper by Iourcha, Yang and Pomianowski referenced there is http://developer.amd.com/gpu_assets/AA-HPG09.pdf

To quote AMD's paper "This filter is the basis for the Edge-Detect Custom Filter AA driver feature on ATI Radeon HD GPUs".

Eurogamer's "not even the most high-end GPUs are using" assertion would be wrong. From top to bottom, current ATI GPUs support Direct3D 10.1 and the methods mentioned in AMD's AA paper.

AMD's Edge Detect method is similar, but it is not the same. As one GoW3 designer said, "It comes down to a difference in how the detecting of edges is handled." MLAA is extremely high fidelity because they picked a very robust way to detect edges (essentially looking for any L-shaped pixel patterns that would result in aliasing). Other methods achieve similar results, but they are not doing things the same way...
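
The "L-shaped patterns" part is the key difference. Very roughly, the first stage looks like the sketch below. This is my own simplified illustration, not the GoW3 or original MLAA implementation, and the coverage-based blending step is omitted:

[code]
// Simplified MLAA-style first stage: mark separation lines between pixels,
// then flag pixels where a horizontal and a vertical separation meet, i.e.
// the corner of an L-shape. A real implementation then walks both runs to get
// their lengths and blends neighbours according to the implied pixel coverage.
#include <vector>
#include <cstdint>
#include <cstdlib>

struct EdgeMask {
    std::vector<uint8_t> horiz;  // 1 = pixel differs from the one below it
    std::vector<uint8_t> vert;   // 1 = pixel differs from the one to its right
};

EdgeMask findSeparations(const std::vector<uint8_t>& lum, int w, int h,
                         int threshold = 16) {
    EdgeMask m{std::vector<uint8_t>(w * h, 0), std::vector<uint8_t>(w * h, 0)};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (y + 1 < h && std::abs(lum[y * w + x] - lum[(y + 1) * w + x]) > threshold)
                m.horiz[y * w + x] = 1;
            if (x + 1 < w && std::abs(lum[y * w + x] - lum[y * w + x + 1]) > threshold)
                m.vert[y * w + x] = 1;
        }
    }
    return m;
}

// Corner of an L-shape: a horizontal separation and a vertical one meet here.
bool isLCorner(const EdgeMask& m, int w, int x, int y) {
    return m.horiz[y * w + x] != 0 && m.vert[y * w + x] != 0;
}
[/code]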

#454 SheikhMuhammad
Member since 2005 • 438 Posts
[QUOTE="SaltyMeatballs"] What's so special about KZ3?
Superior graphics, 3D viewing and Move.

#455 i_am_interested
Member since 2009 • 1077 Posts

[QUOTE="i_am_interested"]

[QUOTE="KingsMessenger"]

20 million pixels per second, which ends up being about 21 FPS at 720p. That is roughly 50ms of latency. By comparison, the first iterations of MLAA on the Cell were running at 120ms latency (8.3 FPS). Optimized code could easily reach less than 30ms latency, which would be good enough for 30 FPS. And that is on a Pentium 4. It is a highly parallel approach which could theoretically do a lot more if they were to scale it onto something like a Core i7.

ronvalencia

It doesn't work like that. Your numbers of 50ms at 21 FPS and 30ms at about 30 FPS assume that the image is generated INSTANTLY on the GPU, which then instantly sends it to a P4 that can spend all of its 30ms (out of 33 total) on it and then send it back to the GPU instantly, which can get it ready to send out instantly for a 30 FPS game. That reads something like:

GPU generates image = 1ms, GPU sends to CPU = 1ms, CPU performs MLAA = 30ms, CPU sends to GPU = 1ms, GPU finishes and sends out = 1ms

My whole point is that people keep downplaying what Santa Monica was able to do by bringing up how MLAA was done on a P4, yet NO ONE can provide any evidence of it being performed on a real-time scene on a P4. REAL TIME.

In reference to http://www.eurogamer.net/articles/digitalfoundry-saboteur-aa-blog-entry, and I quote:

"In the meantime, what we have is something that's new and genuinely exciting from a technical standpoint. We're seeing PS3 attacking a visual problem using a method that not even the most high-end GPUs are using."

Eurogamer didn't factor in AMD's http://developer.amd.com/gpu_assets/AA-HPG09.pdf

It was later corrected by Christer Ericson, director of tools and technology at Sony Santa Monica, and I quote:

"The screenshots may not be showing MLAA, and it's almost certainly not a technique as experimental as we thought it was, but it's certainly the case that this is the most impressive form of this type of anti-aliasing we've seen to date in a console game. Certainly, as we alluded to originally, the concept of using an edge-filter/blur combination isn't new, and continues to be refined. This document by Isshiki and Kunieda published in 1999 suggested a similar technique, and, more recently, AMD's Iourcha, Yang and Pomianowski suggested a more advanced version of the same basic idea".

The AMD paper by Iourcha, Yang and Pomianowski referenced there is http://developer.amd.com/gpu_assets/AA-HPG09.pdf

To quote AMD's paper "This filter is the basis for the Edge-Detect Custom Filter AA driver feature on ATI Radeon HD GPUs".

Eurogamer's "not even the most high-end GPUs are using" assertion would be wrong. From top to bottom, current ATI GPUs support Direct3D 10.1 and the methods mentioned in AMD's AA paper.



You just completely misread and misquoted that entire article. That comment isn't from Christer Ericson, IT'S FROM THE DIGITAL FOUNDRY AUTHOR, and it's not even in regards to MLAA, it's in regards to The Saboteur's custom AA. Had you read that article correctly, you'd see that Christer says The Saboteur's method doesn't fall under MLAA. The same goes for your posting of that AMD link, which doesn't even concern me at all. DF mentions a method not on high-end GPUs because it's not a mainstream consumer method being supplied by either AMD or NVIDIA, but that's a whole other argument.

All you keep posting are research papers to prove that the technique isn't new, but that's not even the point. The point is that methods like these, more so in GoW3, have finally been performed in a published REAL-TIME game.

My point isn't that it's a new technique. I keep asking for real-time performance numbers, and people keep avoiding my request with either obvious misquotes or bad performance numbers, in both cases on the P4.

Are people finally going to stop downplaying what Sony did, or are people just going to keep mentioning the P4 that couldn't even do it in REAL TIME - that is, unless someone can provide those real-time numbers.
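
For reference, the kind of number being asked for here would come from timing the pass inside a live frame loop rather than on a static offline image, something like the pattern below (renderFrame and applyMlaa are placeholder stubs of mine, not any real engine's API):

[code]
// Per-frame wall-clock cost of the AA pass, measured inside a running loop.
// renderFrame() and applyMlaa() are placeholder stubs standing in for a real
// renderer and a real MLAA pass; only the measurement pattern is the point.
#include <chrono>
#include <cstdio>

static void renderFrame() { /* placeholder: draw the scene */ }
static void applyMlaa()   { /* placeholder: run the post-process AA pass */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double budgetMs = 1000.0 / 30.0;  // 33.3 ms per frame at 30 FPS
    for (int frame = 0; frame < 100; ++frame) {
        renderFrame();
        const auto t0 = clock::now();
        applyMlaa();
        const auto t1 = clock::now();
        const double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("frame %d: AA pass %.2f ms of a %.1f ms budget\n", frame, ms, budgetMs);
    }
    return 0;
}
[/code]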

#456 mitu123
Member since 2006 • 155290 Posts

Both are open world (well, Crysis 2 isn't fully open world, but it has open levels) compared to Killzone 3, and they look just about on par with it. Plus, Rage is 60 FPS and has better textures. Killzone 3 only beats both games in animation, though.

Also, this thread is awesome.