PS3 RSX vs 360 Xenos


This topic is locked from further discussion.


#51 Make_me_win
Member since 2006 • 93 Posts
I read an interview with CliffyB and he also mentioned the 95-99% efficiency of Xenos. I think, considering he made what is probably the best-looking game out there, he is a reliable source.

#52 WhySoCry
Member since 2005 • 689 Posts

OMG...The Xenos USA allows it to run at 99% efficiency at all times, with USA there are no wasted pipelines...oh god, never mind, you just don't get it....Nagidar

Shut your mouth. You know absolutely nothing.


#53 mrboo15
Member since 2006 • 2043 Posts

OMG...The Xenos USA allows it to run at 99% efficiency at all times, with USA there are no wasted pipelines...oh god, never mind, you just don't get it....Nagidar

:lol: So you think PS3 developers waste RSX's shaders by not using them? :lol: @ you

Why would developers not use ALL of RSX's shaders?

You're funny :lol:


#54 Make_me_win
Member since 2006 • 93 Posts
But since the 360 and PS3 are so powerful, I think we will not notice a difference. Like that dude said earlier, it all comes down to art style and the games.

#55 lhughey
Member since 2006 • 4886 Posts
We can bark out stats all day, but as they say, "the proof is in the pudding."

#56 mismajor99
Member since 2003 • 5676 Posts
They are both old.

#57 Nagidar
Member since 2006 • 6231 Posts

[QUOTE="Nagidar"]OMG...The Xenos USA allows it to run at 99% efficiency at all times, with USA there are no wasted pipelines...oh god, never mind, you just don't get it....mrboo15

:lol: So you think PS3 developers waste RSX's shaders by not using them? :lol: @ you

Why would developers not use ALL of RSX's shaders?

You're funny :lol:

Wow, you really have no idea what you're talking about....


#58 Nagidar
Member since 2006 • 6231 Posts

[QUOTE="Nagidar"]OMG...The Xenos USA allows it to run at 99% efficiency at all times, with USA there are no wasted pipelines...oh god, never mind, you just don't get it....WhySoCry

Shut your mouth. You know absolutely nothing.

Why? Because I know what I'm talking about? If I don't, please explain it to me, then, since you cows don't want to understand the truth.


#59 Make_me_win
Member since 2006 • 93 Posts

[QUOTE="Nagidar"]OMG...The Xenos USA allows it to run at 99% efficiency at all times, with USA there are no wasted pipelines...oh god, never mind, you just don't get it....mrboo15

:lol: So you think PS3 developers waste RSX's shaders by not using them? :lol: @ you

Why would developers not use ALL of RSX's shaders?

You're funny :lol:

Because they can only do either vertex or pixel operations. If the game is running a vertex-heavy portion, the 24 pixel shaders go idle, and if the game is in a pixel-heavy portion, the other 8 go idle.

All 48 shaders on the 360 can do both.
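To put numbers on that idle-pipeline argument, here's a small toy model in Python (my own sketch, not console code): it compares RSX's fixed 24 pixel / 8 vertex split against the same 32 pipes unified, for a few workload mixes. Xenos actually has 48 unified pipes; 32 is used here only so both designs get an equal budget.

[code]
def time_fixed(work, pixel_frac, pixel_pipes=24, vertex_pipes=8):
    # Each pool can only run its own work type, so the busier pool sets
    # the pace while the other pool sits idle.
    return max(pixel_frac * work / pixel_pipes,
               (1 - pixel_frac) * work / vertex_pipes)

def time_unified(work, pipes=32):
    # Unified pipes pick up whatever work is queued, so nothing idles.
    return work / pipes

for p in (0.50, 0.75, 0.90):
    ratio = time_unified(1.0) / time_fixed(1.0, p)
    print(f"pixel share {p:.0%}: fixed split reaches {ratio:.0%} of unified throughput")
[/code]

At a 75/25 pixel/vertex mix the fixed split happens to match its pipe counts and loses nothing; skew the mix either way and its effective throughput drops while the unified design stays at 100%.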


#60 Nagidar
Member since 2006 • 6231 Posts
[QUOTE="mrboo15"]

[QUOTE="Nagidar"]OMG...The Xenos USA allows it to run at 99% efficiency at all times, with USA there are no wasted pipelines...oh god, never mind, you just don't get it....Make_me_win

:lol: So you think PS3 developers waste RSX's shaders by not using them? :lol: @ you

Why would developers not use ALL of RSX's shaders?

You're funny :lol:

Because they can only do either vertex or pixel operations. If the game is running a vertex-heavy portion, the 24 pixel shaders go idle, and if the game is in a pixel-heavy portion, the other 8 go idle.

All 48 shaders on the 360 can do both.

BINGO! But hush your mouth, they don't want to know the truth.

#61 mismajor99
Member since 2003 • 5676 Posts
It's funny reading console fanboy responses about outdated console GPUs, something that's completely alien to them to begin with. Hehe.

#62 Make_me_win
Member since 2006 • 93 Posts

It's funny reading console fanboy responses about outdated console GPUs, something that's completely alien to them to begin with. Hehe.mismajor99

It's been 5 years since I upgraded my PC, I'll give you that. What's a good GPU right now?


#63 Nagidar
Member since 2006 • 6231 Posts

[QUOTE="mismajor99"]This is funny reading console fanboy responses about outdated console GPU's, something that's completely alien to them to begin with. hehe.Make_me_win

Its been 5 years since I upgraded my PC, ill give you that. Whats a good GPU right now?

8800 Ultra, the 2900 XT is good on paper, but AMD needs to update the drivers because they suck.

EDIT: Just checked AMD's web site and they updated their Catalyst drivers, DL'n them now to check them out.


#64 Stonin
Member since 2006 • 3021 Posts
[QUOTE="Make_me_win"]

[QUOTE="mismajor99"]This is funny reading console fanboy responses about outdated console GPU's, something that's completely alien to them to begin with. hehe.Nagidar

Its been 5 years since I upgraded my PC, ill give you that. Whats a good GPU right now?

8800 Ultra, the 2900 XT is good on paper, but AMD needs to update the drivers because they suck.

EDIT: Just checked AMD's web site and they updated their Catalyst drivers, DL'n them now to check them out.

Don't buy the Ultra! It is an overclocked 8800 GTX but at a much higher cost. Just buy the EVGA OC'd card or the BFG 8800 GTX OC2 - They run at the speed of the Ultra, cost a lot less, come with a lifetime warranty and won't make Nvidia pump out stupid tiny speed increases with premium price tags :).

Back on topic, there is so much ignorance in this thread it's not funny. Mrboo always thinks he knows what he's talking about but doesn't have a clue! Hey, Mrboo I guess I shouldn't have upgraded my GPU recently, just waited for some 'code' to bring it up to par with the 8800's right? :lol:


#65 mismajor99
Member since 2003 • 5676 Posts
[QUOTE="Make_me_win"]

[QUOTE="mismajor99"]This is funny reading console fanboy responses about outdated console GPU's, something that's completely alien to them to begin with. hehe.Nagidar

Its been 5 years since I upgraded my PC, ill give you that. Whats a good GPU right now?

8800 Ultra, the 2900 XT is good on paper, but AMD needs to update the drivers because they suck.

EDIT: Just checked AMD's web site and they updated their Catalyst drivers, DL'n them now to check them out.

Yes, and the 8900's and most likely the 2950's from ATI will be out very shortly, Holiday season which will obliterate the current 8800 and 2900 respectively. Good times for GPUs, especially as they get cheaper.


#66 blacktorn
Member since 2004 • 8299 Posts
The Xenos is more powerful than the RSX; there are no ifs or buts, it is the more advanced graphics processor.

#67 gamer4life85
Member since 2003 • 1203 Posts
We should all know which system has the better GPU by now. The 360 clearly has the better GPU, so I don't get why we must debate this again.

#68 muscleserge
Member since 2005 • 3307 Posts
You forgot one MAJOR thing: efficiency. The RSX runs at around 70% and the Xenos runs at 99%, on top of being more powerful. So no, they are not "about equal"; not only is the Xenos more powerful, it's more efficient.Nagidar

Xenos efficiency is somewhere around 90-95%; the only GPUs more efficient are the G80s.

#69 chigga102
Member since 2005 • 389 Posts

We should all know which system has the better GPU by now. The 360 clearly has the better GPU, so I don't get why we must debate this again.gamer4life85

Yes, and 360 devs get the better tools.


#70 CossackNoodle
Member since 2004 • 232 Posts
Well, I must be one blind fanboy in denial, since I don't believe it's so much more powerful than the RSX. Maybe in some areas, but that's it; they both have their advantages.

#71 Runningflame570
Member since 2005 • 10388 Posts

I read an interview with CliffyB and he also mentioned the 95-99% efficiency of Xenos. I think, considering he made what is probably the best-looking game out there, he is a reliable source. Make_me_win

Best looking, maybe, but not the most impressive on a technical level.


#72 muscleserge
Member since 2005 • 3307 Posts

Thanks to the efficiency of the 360 GPU's unified shader architecture and its 10MB of eDRAM, the GPU is able to achieve 4XFSAA at no performance cost. ATI and Microsoft's goal was to eliminate memory bandwidth as a bottleneck, and they seem to have succeeded. Any PC gamers out there will notice that when they turn on things such as AA or HDR, performance goes down; that's because those features eat bandwidth, so the efficiency of the GPU's operation decreases as they are turned on. On the 360, HDR+4XAA together are like nothing to the GPU with proper use of the eDRAM. The eDRAM contains a 3D logic unit with 192 floating-point processors inside. The logic unit can exchange data with the 10MB of RAM at 2 terabits a second. Things such as antialiasing, computing Z depths, or occlusion culling can happen on the eDRAM without impacting the GPU's workload.

Xenos writes to this eDRAM for its framebuffer and is connected to it via a 32GB/sec link (this number is extremely close to the theoretical peak because the eDRAM is right there on the 360 GPU's daughter die). Don't forget the eDRAM has an internal bandwidth of 256GB/s; divide that 256GB/s by the 32GB/s link from Xenos to the eDRAM and you find that Xenos can multiply its effective bandwidth to the framebuffer by a factor of 8 when processing pixels that make use of the eDRAM, which includes HDR, AA, and other things. This leads to a maximum of 32*8=256GB/s, which, to say the least, is a very effective way of dealing with bandwidth-intensive tasks.

For this to be possible, developers need to set up their rendering engine to take advantage of both the eDRAM and the available onboard 3D logic. If anyone is confused why the 32GB/s is multiplied by 8, it's because once data travels over the 32GB/s bus it can be processed 8 times by the eDRAM logic against the eDRAM memory at a rate of 256GB/s, so for every 32GB/s you send over, 256GB/s gets processed. This leaves RSX at a bandwidth disadvantage compared to Xenos. Needless to say, the 360 not only has an overabundance of video memory bandwidth, it also has impressive memory-saving features. For example, 720p with 4XFSAA on a traditional architecture would require 28MB of memory; on the 360 only 16MB is required. There are also features in the 360's Direct3D API where developers can fit two 128x128 textures into the space required for one, for example. So even with all the memory and all the memory bandwidth, they are still very mindful of how it's used.

I wasn't too clear earlier on the difference between RSX's dedicated pixel and vertex shader pipelines and the 360's unified shader architecture. The 360 GPU has 48 unified pipelines capable of accepting either pixel or vertex shader operations, whereas with the older dedicated-pipeline architecture RSX uses, in a vertex-heavy situation most of the 24 pixel pipes go idle instead of helping out with vertex work.

On the flip side, in a pixel-heavy situation those 8 vertex shader pipelines sit idle and don't help the pixel pipes (because they aren't able to). With the 360's unified architecture, none of the pipes go idle in a vertex-heavy situation: all 48 unified pipelines can help with either pixel or vertex shader operations when needed, so efficiency is greatly improved and so is overall performance. When pipelines are forced to go idle because they lack the capability to help another set of pipelines accomplish their task, it's detrimental to performance. This inefficient manner is how all current GPUs operate, including the PS3's RSX: the pipelines go idle because the pixel pipes aren't able to help the vertex pipes accomplish a task, or vice versa.

What's even more impressive about this GPU is that it determines by itself the balance of how many pipelines to dedicate to vertex or pixel shader operations at any given time. A programmer is NOT needed to handle any of this; the GPU takes care of it all in the quickest, most efficient way possible.

1080p is not a smart resolution to target in any form this generation, but if 360 developers wanted to get serious about 1080p, then thanks to Xenos they could actually outperform the PS3 at 1080p. (The less efficient GPU always shows its weaknesses against the competition at higher resolutions, so the best way for RSX to be competitive is to stick to 720p.) In vertex-shader-limited situations the 360's GPU will literally be 6 times faster than RSX. With a unified shader architecture things are much more efficient than previous architectures allowed, which is extremely important; the 360's GPU, for example, is 95-99% efficient with 4XAA enabled. With traditional architectures there are design-related roadblocks that prevent such efficiency. To avoid the roadblocks that held back previous hardware, the 360 GPU design team created a complex system of hardware threading inside the chip itself. In this case, each thread is a program associated with the shader arrays. The Xbox 360 GPU can manage and maintain state information on 64 separate threads in hardware. There's a thread buffer inside the chip, and the GPU can switch between threads instantaneously to keep the shader arrays busy at all times.

Want to know why Xenos doesn't need as much raw horsepower to outperform something like the X1900XTX or the 7900GTX? It makes up for having less raw horsepower by being efficient enough to actually achieve its advertised performance numbers, which is an impressive feat. The X1900XTX has a peak pixel fillrate of 10.4 gigasamples a second, while the 7900GTX has a peak pixel fillrate of 15.6 gigasamples a second. Neither of them can actually achieve and sustain those peak numbers, though, because they aren't efficient enough; they get away with it because they can also bank on all that raw power. The performance winner between the 7900GTX and the X1900XTX is actually the X1900XTX despite its lower pixel fillrate (especially at higher resolutions), because it has twice as many pixel pipes and is the more efficient of the two. It's a testament to how important efficiency is. So how exactly can the mere 360 GPU stand up to both of those with only a 128-bit memory interface and 500MHz? The 360 GPU with 4XFSAA enabled achieves AND sustains its peak fillrate of 16 gigasamples per second, thanks to the combination of the unified shader architecture and the excessive amount of bandwidth, which gives it the kind of efficiency that lets it outperform GPUs with far more raw horsepower. I guess it also helps that it's the single most advanced GPU currently available for purchase anyway. Things get even better when you factor in Xenos' MEMEXPORT ability, which enables "streamout" and opens the door for Xenos to achieve DX10-class functionality. A shame Microsoft chose to disable Xenos' other 16 pipelines to improve yields and keep costs down. Not many are even aware that the 360's GPU has the exact same number of pipelines as ATI's unreleased R600; to keep costs down and make the GPU easier to manufacture, Microsoft chose to disable one of the shader arrays containing 16 pipelines. What MEMEXPORT does is expand the graphics pipeline in a more general-purpose and programmable manner.

Architecture > RAW POWA!

Make_me_win
Xenos can only do 4xAA at 720p; if it goes above that, 10MB of eDRAM isn't enough. Besides, even Gears didn't have AA, so obviously most devs don't use the eDRAM for AA. Xenos will also have problems rendering above 720p with its 128-bit memory bus and 22GB/s of bandwidth. My 7900GT has a 50GB/s, 256-bit bus, and it's already starting to show some weaknesses at 1050p, especially with HDR. As for the 7900GTX vs. X1900XTX: the X1900XTX has more shader power, so it wins in some games, and the 7900GTX has more texture units and ALUs, so it wins in other games. To say that one is more powerful than another is foolish.
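For anyone who wants to sanity-check the figures being thrown around, here's a back-of-envelope sketch (my arithmetic, not vendor data; it assumes 4 bytes per color sample and 4 per Z/stencil sample):

[code]
GPU_TO_EDRAM_GBPS = 32      # Xenos -> daughter-die link, as quoted above
EDRAM_INTERNAL_GBPS = 256   # eDRAM-internal bandwidth, as quoted above
print(EDRAM_INTERNAL_GBPS / GPU_TO_EDRAM_GBPS)  # 8.0 -> the "x8" multiplier

width, height, samples = 1280, 720, 4           # 720p with 4xAA
bytes_per_sample = 4 + 4                        # color + Z/stencil
fb_mb = width * height * samples * bytes_per_sample / 2**20
print(f"{fb_mb:.1f} MB")                        # ~28.1 MB vs 10 MB of eDRAM
[/code]

The second result is where the 28MB figure in the quoted wall comes from, and it also supports muscleserge's point: a full 720p 4xAA framebuffer doesn't fit in 10MB of eDRAM without tiling.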

#73 Runningflame570
Member since 2005 • 10388 Posts

The 8800 Ultra; the 2900 XT is good on paper, but AMD needs to update the drivers because they suck.

Nagidar

...Nobody in their right mind is buying the 8800 Ultra; in fact, most people aren't even buying the 8800GTX. Among new DX10 cards, most people are buying the 8800GTS 320MB.

Also, no: the 2900XT's drivers are definitely better than the 8800's drivers were for their first few months.

Meanwhile, high-end DX9 is dominated by the 7900GS and X1950 Pro, with budget buyers going for the 7600GT in most cases.


#74 chigga102
Member since 2005 • 389 Posts
[QUOTE="Make_me_win"]

Thanks to the efficiency of the 360 GPU's unified shader architecture and this 10MB of EDRAM the GPU is able to achieve 4XFSAA at no performance cost. ATI and Microsoft's goal was to eliminate memory bandwidth as a bottleneck and they seem to have succeeded. If there are any pc gamers out there they notice that when they turn on things such as AA or HDR the performance goes down that's because those features eat bandwidth hence the efficiency of the GPU's operation decreases as they are turned on. With the 360 HDR+4XAA simultaneously are like nothing to the GPU with proper use of the EDRAM. The EDRAM contains a 3D logic unit which has 192 Floating Point Unit processors inside. The logic unit will be able to exchange data with the 10MB of RAM at 2 Terabits a second. Things such as antialiasing, computing z depths or occlusion culling can happen on the EDRAM without impacting the GPU's workload.

Xenos writes to this EDRAM for its framebuffer and it's connected to it via a 32GB/sec connection (this number is extremely close to the theoretical because the EDRAM is right there on the 360 GPU's daughter die.) Don't forget the EDRAM has a bandwidth of 256GB/s and its only by dividing this 256GB/s by the initial 32GB/s that we get from the connection of Xenos to the EDRAM we find out that Xenos is capable of multiplying its effective bandwidth to the frame buffer by a factor of 8 when processing pixels that make use of the EDRAM, which includes HDR or AA and other things. This leads to a maximum of 32*8=256GB/s which, to say the least, is a very effective way of dealing with bandwidth intensive tasks.

In order for this to be possible developers would need to setup their rendering engine to take advantage of both the EDRAM and the available onboard 3D logic. If anyone is confused why the 32GB/s is being multiplied by 8 its because once data travels over the 32GB/s bus it is able to be processed 8 times by the EDRAM logic to the EDRAM memory at a rate of 256GB/s so for every 32GB/s you send over 256GB/s gets processed. This results in RSX being at a bandwidth disadvantage in comparison to Xenos. Needless to say the 360 not only has an overabundance of video memory bandwidth, but it also has amazing memory saving features. For example to get 720P with 4XFSAA on traditional architecture would require 28MB worth of memory. On the 360 only 16MB is required. There are also features in the 360's Direct3D API where developers are able to fit 2 128x128 textures into the same space required for one, for example. So even with all the memory and all the memory bandwidth, they are still very mindful of how it's used.

I wasn't too clear earlier on the difference between the RSX's dedicated pixel and vertex shader pipelines compared to the 360s unified shader architecture. The 360 GPU has 48 unified pipelines capable of accepting either pixel or vertex shader operations whereas with the older dedicated pixel and vertex pipeline architecture that RSX uses when you are in a vertex heavy situation most of the 24 pixel pipes go idle instead of helping out with vertex work.

Or on the flip side in a pixel heavy situation those 8 vertex shader pipelines are just idle and don't help out the pixel pipes (because they aren't able to), but with the 360's unified architecture in a vertex heavy situation for example none of the pipes go idle. All 48 unified pipelines are capable of helping with either pixel or vertex shader operations when needed so as a result efficiency is greatly improved and so is overall performance. When pipelines are forced to go idle because they lack the capability to help another set of pipelines accomplish their task it's detrimental to performance. This inefficient manner is how all current GPUs operate including the PS3's RSX. The pipelines go idle because the pixel pipes aren't able to help the vertex pipes accomplish a task or vice versa.Whats even more impressive about this GPU is it by itself determines the balance of how many pipelines to dedicate to vertex or pixel shader operations at any given time a programmer is NOT needed to handle any of this the GPU takes care of all this itself in the quickest most efficient way possible.1080p is not a smart resolution to target in any form this generation, but if 360 developers wanted to get serious about 1080p, thanks to Xenos, could actually outperform the ps3 in 1080p. (The less efficient GPU always shows its weaknesses against the competition in higher resolutions so the best way for the rsx to be competitive is to stick to 720P) In vertex shader limited situations the 360's gpu will literally be 6 times faster than RSX. With a unified shader architecture things are much more efficient than previous architectures allowed (which is extremely important). The 360's GPU for example is 95-99% efficient with 4XAA enabled. With traditional architecture there are design related roadblocks that prevent such efficiency. To avoid such roadblocks, which held back previous hardware, the 360 GPU design team created a complex system of hardware threading inside the chip itself. In this case, each thread is a program associated with the shader arrays. The Xbox 360 GPU can manage and maintain state information on 64 separate threads in hardware. There's a thread buffer inside the chip, and the GPU can switch between threads instantaneously in order to keep the shader arrays busy at all times.

Want to know why Xenos doesn't need as much raw horsepower to outperform say something like the x1900xtx or the 7900GTX? It makes up for not having as much raw horsepower by actually being efficient enough to fully achieve its advertised performance numbers which is an impressive feat. The x1900xtx has a peak pixel fillrate of 10.4Gigasamples a second while the 7900GTX has a peak pixel fillrate of 15.6Gigasamples a second. Neither of them is actually able to achieve and sustain those peak fillrate performance numbers though due to not being efficient enough, but they get away with it in this case since they can also bank on all the raw power. The performance winner between the 7900GTX and the X1900XTX is actually the X1900XTX despite a lower pixel fillrate (especially in higher resolutions) because it has twice as many pixel pipes and is the more efficient of the 2. It's just a testament as to how important efficiency is. Well how exactly can the mere 360 GPU stand up to both of those with only a 128 bit memory interface and 500MHZ? Well the 360 GPU with 4XFSAA enabled achieves AND sustains its peak fillrate of 16Gigasamples per second which is achieved by the combination of the unified shader architecture and the excessive amount of bandwidth which gives it the type of efficiency that allows it to outperform GPUs with far more raw horsepower. I guess it also helps that it's the single most advanced GPU currently available anyway for purchase. Things get even better when you factor in the Xenos' MEMEXPORT ability which allows it to enable "streamout" which opens the door for Xenos to achieve DX10 class functionality. A shame Microsoft chose to disable Xenos' other 16 pipelines to improve yields and keep costs down. Not many are even aware that the 360's GPU has the exact same number of pipelines as ATI's unreleased R600, but to keep costs down and to make the GPU easier to manufacture, Microsoft chose to disable one of the shader arrays containing 16 pipelines. What MEMEXPORT does is it expands the graphics pipeline in more general purpose and programmable manner.

Architecture > RAW POWA!

muscleserge

Xenos can only do 4xAA at 720p, if it goes above then 10mb of EDRAM isn't enough. Besides even Gears didn't have AA, so obviously most devs don't use the EDRAM for AA. Xenos will also have problems rendering above 720p, with its 128bit mem bus, and 22gb/s bandwidth. My 7900GT has 50gb/s 256bit bus, and its already starting to show some weaknesses at 1050p, especially with HDR. As for the 7900GTX vs x1900xtx, the x1900XTX has more shader power, so it wins in some games, and the 7900GTX has more texture units and ALU so it wins in other games. To say that on eis more powerful than anouther is foolish.

So which would you get, the 7900GS or the X1950 Pro?


#75 d3thm0nkey
Member since 2006 • 615 Posts

Architecture-wise, the 360's Xenos GPU is a generation ahead of the PS3's RSX, but in terms of actual performance they're about equal. The most important thing to look at when comparing GPUs is their features; in other words, what graphical effects can one GPU do over another?

In PC terms this is very easy: compare the features of a DirectX8-level GPU to a DirectX9-level GPU. The difference between the two is staggering. DirectX8 can't do HDR, parallax mapping, or soft shadows, and it also has weaker shaders, meaning the effects are low quality. Now, looking at RSX and Xenos, there are NO effects that one can do over the other: RSX can't do any magic voodoo effects that Xenos can't do, and vice versa.

The only thing the two GPUs differ on is their API: RSX runs on a heavily tweaked PS3 version of OpenGL ES 2.0, and the 360 runs on slightly tweaked DirectX9. This is where the GPUs start to differ. OpenGL has always given developers better access to a GPU's features and functions than DirectX (hence why most movie studios' render programs are OpenGL based). DirectX doesn't allow developers to access ALL of a GPU's features and functions, as it has to accommodate thousands of PC configurations. This is a big problem on the PC platform, as it means some ultra-high-end GPU isn't using all of its feature set; in other words, that GPU is being WASTED. But being closed platforms means console developers can "trick" the API into letting the hardware do effects the GPU supports but the API doesn't. That's why we saw the original Xbox do amazing things with its GeForce 3-based GPU: Xbox developers tricked the API and unlocked most of the GeForce 3's feature set. OpenGL, on the other hand, gives access to ALL of a GPU's features. We all know Microsoft updated the 360's API, calling it DX9.5; that update just unlocked more of the GPU's effects that normal DX9 wouldn't allow developers to access. It's no DX10, but it's not plain DX9 either.

So how can the PS3 or 360 produce effects the other machine couldn't do? The answer: do them in SOFTWARE on the CPU. That's where Cell comes in. Cell can add effects to RSX's feature list that couldn't be done on the 360 because:

1. It hasn't got enough spare CPU cycles to do the effect, or

2. Its CPU just can't do it AT ALL.

The 360 has a hardware function called MEMEXPORT. It's similar to the PS3's setup in that it allows the 360's GPU to "take" a whole core off the game and use it for extra graphics processing, but while helpful, this setup hasn't got anywhere near the flexibility of the PS3's Cell+RSX combo.

So in the end it will all boil down to how much Cell can actually do: how many extra special-effects features can Cell add to RSX? That is what will make the difference between the two consoles, but it won't happen overnight; it will take YEARS before we start to see if Cell really can add that extra sparkle to PS3 games. And in my opinion, judging by what we have seen Cell do already in Motorstorm, Resistance, Lair, Warhawk, and MGS4, I'd say Cell and PS3 graphics will be awesome.

mrboo15
With respect, you're wrong about OpenGL vs. DirectX. The reason studios sometimes use OpenGL is that they use clustered Linux boxes to render videos. This does not mean it is superior. People use Linux because it is free, and OpenGL is just part of the FREE strategy. But free is rarely better. Take a look:

SQL Express >>>>>>>>> MySQL (no stored procs)

IIS >> Apache (no multithreading, have to use 3rd-party support for COM)

Windows XP >> Linux (for gaming; Windows supports DX and OpenGL, DX on Linux is sketchy)

DX > OpenGL (OpenGL is NOT as feature-rich as DX10; sorry, you're misinformed)

MS APIs >> open-source APIs (always easier to use and have documentation)

I am not an MS fanboy, but I don't hate on them either. Nor Sony, for that matter.

#76 Runningflame570
Member since 2005 • 10388 Posts

So which would you get, the 7900GS or the X1950 Pro?

chigga102

That depends solely on your budget and your power supply's capabilities. The X1950 Pro edges out the 7900GS somewhat, but it also requires more wattage and amperage than the 7900GS and is a bit more expensive.


#77 chigga102
Member since 2005 • 389 Posts
[QUOTE="chigga102"]

So which would you get, the 7900GS or the X1950 Pro?

Runningflame570

That depends solely on your budget and your power supply's capabilities. The X1950 Pro edges out the 7900GS somewhat, but it also requires more wattage and amperage than the 7900GS and is a bit more expensive.

Actually, on Newegg I found the X1950 Pro to be cheaper, at $126.99; the cheapest 7900GS is $139.99. I just want to play Oblivion.


#78 Nagidar
Member since 2006 • 6231 Posts
[QUOTE="chigga102"]

So which would you get, the 7900GS or the X1950 Pro?

Runningflame570

That depends solely on your budget and your power supply's capabilities. The X1950 Pro edges out the 7900GS somewhat, but it also requires more wattage and amperage than the 7900GS and is a bit more expensive.

Agreed. I have the X1950 Pro in one of my other computers and it's definitely a good budget card; in CrossFire it actually edges out most other dual-card setups.


#79 Runningflame570
Member since 2005 • 10388 Posts

Actually, on Newegg I found the X1950 Pro to be cheaper, at $126.99; the cheapest 7900GS is $139.99. I just want to play Oblivion.

chigga102

Well, that's fairly unusual and isn't something I've seen before... I got my 7900GS (with Linux, NVIDIA is really the only option) for around $115 after rebate. Still, if you just want to play Oblivion, even a 7600GT should do it at lower settings; if you want higher settings, yes, you'll need a stronger card.

In that case, I would say check the amperage and wattage on your PSU: if it's under 30 amps and, say, 400-450 watts, go 7900GS; if it's over, go X1950 Pro.
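Runningflame570's rule of thumb is easy to write down as a toy helper (the 30-amp and 400-450-watt thresholds are his forum advice, not official card requirements):

[code]
def pick_card(psu_watts, rail_amps):
    # Under ~30A / 450W, the lower-draw 7900GS is the safer pick.
    if rail_amps < 30 and psu_watts <= 450:
        return "7900GS"
    return "X1950 Pro"  # the stronger card, but it needs the beefier supply

print(pick_card(400, 26))  # -> 7900GS
print(pick_card(550, 34))  # -> X1950 Pro
[/code]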


#80 Kevanio
Member since 2006 • 580 Posts
[QUOTE="Runningflame570"][QUOTE="chigga102"]

so which would u get the 7900gs or the x1950pro ?

chigga102

That depends solely on your budget and power supply's capabilities. The x1950 PRO edges out the 7900GS somewhat but it also requires more wattage and amperage than the 7900GS and is a bit more expensive to buy.

actually on newegg i found x1950pro to be cheaper. at 126.99 and cheapest 7900gs is 139.99 so i just wanna play oblivion

Anyone can play Oblivion... ever heard of... Oldblivion? ;) haha


#81 muscleserge
Member since 2005 • 3307 Posts
[QUOTE="muscleserge"][QUOTE="Make_me_win"]

Thanks to the efficiency of the 360 GPU's unified shader architecture and this 10MB of EDRAM the GPU is able to achieve 4XFSAA at no performance cost. ATI and Microsoft's goal was to eliminate memory bandwidth as a bottleneck and they seem to have succeeded. If there are any pc gamers out there they notice that when they turn on things such as AA or HDR the performance goes down that's because those features eat bandwidth hence the efficiency of the GPU's operation decreases as they are turned on. With the 360 HDR+4XAA simultaneously are like nothing to the GPU with proper use of the EDRAM. The EDRAM contains a 3D logic unit which has 192 Floating Point Unit processors inside. The logic unit will be able to exchange data with the 10MB of RAM at 2 Terabits a second. Things such as antialiasing, computing z depths or occlusion culling can happen on the EDRAM without impacting the GPU's workload.

Xenos writes to this EDRAM for its framebuffer and it's connected to it via a 32GB/sec connection (this number is extremely close to the theoretical because the EDRAM is right there on the 360 GPU's daughter die.) Don't forget the EDRAM has a bandwidth of 256GB/s and its only by dividing this 256GB/s by the initial 32GB/s that we get from the connection of Xenos to the EDRAM we find out that Xenos is capable of multiplying its effective bandwidth to the frame buffer by a factor of 8 when processing pixels that make use of the EDRAM, which includes HDR or AA and other things. This leads to a maximum of 32*8=256GB/s which, to say the least, is a very effective way of dealing with bandwidth intensive tasks.

In order for this to be possible developers would need to setup their rendering engine to take advantage of both the EDRAM and the available onboard 3D logic. If anyone is confused why the 32GB/s is being multiplied by 8 its because once data travels over the 32GB/s bus it is able to be processed 8 times by the EDRAM logic to the EDRAM memory at a rate of 256GB/s so for every 32GB/s you send over 256GB/s gets processed. This results in RSX being at a bandwidth disadvantage in comparison to Xenos. Needless to say the 360 not only has an overabundance of video memory bandwidth, but it also has amazing memory saving features. For example to get 720P with 4XFSAA on traditional architecture would require 28MB worth of memory. On the 360 only 16MB is required. There are also features in the 360's Direct3D API where developers are able to fit 2 128x128 textures into the same space required for one, for example. So even with all the memory and all the memory bandwidth, they are still very mindful of how it's used.

I wasn't too clear earlier on the difference between the RSX's dedicated pixel and vertex shader pipelines compared to the 360s unified shader architecture. The 360 GPU has 48 unified pipelines capable of accepting either pixel or vertex shader operations whereas with the older dedicated pixel and vertex pipeline architecture that RSX uses when you are in a vertex heavy situation most of the 24 pixel pipes go idle instead of helping out with vertex work.

Or on the flip side in a pixel heavy situation those 8 vertex shader pipelines are just idle and don't help out the pixel pipes (because they aren't able to), but with the 360's unified architecture in a vertex heavy situation for example none of the pipes go idle. All 48 unified pipelines are capable of helping with either pixel or vertex shader operations when needed so as a result efficiency is greatly improved and so is overall performance. When pipelines are forced to go idle because they lack the capability to help another set of pipelines accomplish their task it's detrimental to performance. This inefficient manner is how all current GPUs operate including the PS3's RSX. The pipelines go idle because the pixel pipes aren't able to help the vertex pipes accomplish a task or vice versa.Whats even more impressive about this GPU is it by itself determines the balance of how many pipelines to dedicate to vertex or pixel shader operations at any given time a programmer is NOT needed to handle any of this the GPU takes care of all this itself in the quickest most efficient way possible.1080p is not a smart resolution to target in any form this generation, but if 360 developers wanted to get serious about 1080p, thanks to Xenos, could actually outperform the ps3 in 1080p. (The less efficient GPU always shows its weaknesses against the competition in higher resolutions so the best way for the rsx to be competitive is to stick to 720P) In vertex shader limited situations the 360's gpu will literally be 6 times faster than RSX. With a unified shader architecture things are much more efficient than previous architectures allowed (which is extremely important). The 360's GPU for example is 95-99% efficient with 4XAA enabled. With traditional architecture there are design related roadblocks that prevent such efficiency. To avoid such roadblocks, which held back previous hardware, the 360 GPU design team created a complex system of hardware threading inside the chip itself. In this case, each thread is a program associated with the shader arrays. The Xbox 360 GPU can manage and maintain state information on 64 separate threads in hardware. There's a thread buffer inside the chip, and the GPU can switch between threads instantaneously in order to keep the shader arrays busy at all times.

Want to know why Xenos doesn't need as much raw horsepower to outperform say something like the x1900xtx or the 7900GTX? It makes up for not having as much raw horsepower by actually being efficient enough to fully achieve its advertised performance numbers which is an impressive feat. The x1900xtx has a peak pixel fillrate of 10.4Gigasamples a second while the 7900GTX has a peak pixel fillrate of 15.6Gigasamples a second. Neither of them is actually able to achieve and sustain those peak fillrate performance numbers though due to not being efficient enough, but they get away with it in this case since they can also bank on all the raw power. The performance winner between the 7900GTX and the X1900XTX is actually the X1900XTX despite a lower pixel fillrate (especially in higher resolutions) because it has twice as many pixel pipes and is the more efficient of the 2. It's just a testament as to how important efficiency is. Well how exactly can the mere 360 GPU stand up to both of those with only a 128 bit memory interface and 500MHZ? Well the 360 GPU with 4XFSAA enabled achieves AND sustains its peak fillrate of 16Gigasamples per second which is achieved by the combination of the unified shader architecture and the excessive amount of bandwidth which gives it the type of efficiency that allows it to outperform GPUs with far more raw horsepower. I guess it also helps that it's the single most advanced GPU currently available anyway for purchase. Things get even better when you factor in the Xenos' MEMEXPORT ability which allows it to enable "streamout" which opens the door for Xenos to achieve DX10 class functionality. A shame Microsoft chose to disable Xenos' other 16 pipelines to improve yields and keep costs down. Not many are even aware that the 360's GPU has the exact same number of pipelines as ATI's unreleased R600, but to keep costs down and to make the GPU easier to manufacture, Microsoft chose to disable one of the shader arrays containing 16 pipelines. What MEMEXPORT does is it expands the graphics pipeline in more general purpose and programmable manner.

Architecture > RAW POWA!

chigga102

Xenos can only do 4xAA at 720p, if it goes above then 10mb of EDRAM isn't enough. Besides even Gears didn't have AA, so obviously most devs don't use the EDRAM for AA. Xenos will also have problems rendering above 720p, with its 128bit mem bus, and 22gb/s bandwidth. My 7900GT has 50gb/s 256bit bus, and its already starting to show some weaknesses at 1050p, especially with HDR. As for the 7900GTX vs x1900xtx, the x1900XTX has more shader power, so it wins in some games, and the 7900GTX has more texture units and ALU so it wins in other games. To say that on eis more powerful than anouther is foolish.

so which would u get the 7900gs or the x1950pro ?

Me personally, I would get the 7900GS, but the 512MB version. The 7900GS has a lot of OC headroom on stock volts; I believe some people run them at 600MHz core, which is pretty good. And there is always a volt mod that will allow clocks of 700+MHz, with some decent cooling of course.

#82 muscleserge
Member since 2005 • 3307 Posts
And one more thing: an OCed 7900GS will play Oblivion nicely on very high settings, especially the 512MB one. It will also play Crysis better than the X1950 Pro, simply because Crysis is an Nvidia-sponsored game.

#83 210189677155857843583653671808
Member since 2006 • 748 Posts

Architecture-wise, the 360's Xenos GPU is a generation ahead of the PS3's RSX. *snip*mrboo15

Nice one, mate; it's good to see you spent the time researching the cards before you started spouting. There are a few threads in this forum that are basically biased from the off, going on about how the 360 is far superior, without knowing any facts about the GPUs.



#85 Teuf_
Member since 2004 • 30805 Posts
My god, so much misinformation...

#86 deactivated-5f9e3c6a83e51
Member since 2004 • 57548 Posts
I think you guys are splitting hairs. Both systems are capable of high-end and comparable graphics. One having a slight edge over the other is really not that significant.

#87 Teuf_
Member since 2004 • 30805 Posts
Okay, a few quick things I'll clear up:

-Xenos is not a "generation ahead" of RSX. They're both very much DX9 parts; Xenos just has the benefit of unified shaders.

-4x MSAA on Xenos is not "free". It's free in the sense that it doesn't cause the bandwidth problem it does on the PS3, but in order to do any MSAA at 720p you need to use tiled rendering (a 720p framebuffer with even 2xAA doesn't fit in the 10MB of eDRAM). Tiled rendering has its own set of headaches.

-The different-API stuff you're talking about only applies to PCs, not to consoles. Being consoles, developers use versions of GL and DX specifically tailored to the hardware they're targeting.
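Teuf_'s tiling point checks out with a little arithmetic. A sketch, assuming 4 bytes of color plus 4 bytes of Z/stencil per sample (my assumption; real layouts vary):

[code]
EDRAM_BYTES = 10 * 2**20

def tiles_needed(width, height, aa_samples, bytes_per_sample=8):
    fb = width * height * aa_samples * bytes_per_sample
    return -(-fb // EDRAM_BYTES)   # ceiling division

print(tiles_needed(1280, 720, 1))  # 1: no AA fits in a single pass
print(tiles_needed(1280, 720, 2))  # 2: even 2xAA overflows the 10 MB
print(tiles_needed(1280, 720, 4))  # 3: 4xAA needs three tiles
[/code]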

#88 Fusible
Member since 2005 • 2828 Posts
See, unified shaders take away stress from the core, meaning it becomes more efficient in the long run. Why? All shaders are pushed through a single path instead of two. On RSX, work is pushed through two separate pipelines and then brought together to provide you with the image; with unified shaders it all runs through one pipeline, instead of having separate vertex and pixel shader pipelines. This is why unified shaders are more effective, and you can actually push out better graphics. Why do you think Nvidia built the 8800 around unified shaders? Because you can do a lot more with it. So yes, Xenos is superior in design to the RSX. Period!

#89 Make_me_win
Member since 2006 • 93 Posts

See, unified shaders take away stress from the core, meaning it becomes more efficient in the long run. *snip*Fusible

That's what I keep saying! If unified architecture is so "unimportant", why oh why is Nvidia using it now on their GPUs? Because unified architecture is the future!


#90 Macolele
Member since 2006 • 534 Posts
Xenos is better than RSX. However, we forget RSX has more potential: Xenos's 200 GFLOPS compared with RSX's 360 GFLOPS. You can say the independent shader pipes get bottlenecked, but RSX will be unlocked by Cell, and that's 50% more power than Xenos.

#91 Make_me_win
Member since 2006 • 93 Posts

Xenos is better than RSX. However, we forget RSX has more potential: Xenos's 200 GFLOPS compared with RSX's 360 GFLOPS. You can say the independent shader pipes get bottlenecked, but RSX will be unlocked by Cell, and that's 50% more power than Xenos.Macolele

The PS2 also had an advantage over the Xbox in the GFLOPS department, and you saw the smoking the Xbox gave the PS2 technologically.

Xenos architecture > RSX, and that is what really matters.


#92 Nagidar
Member since 2006 • 6231 Posts

Xenos is better than RSX. However, we forget RSX has more potential: Xenos's 200 GFLOPS compared with RSX's 360 GFLOPS. You can say the independent shader pipes get bottlenecked, but RSX will be unlocked by Cell, and that's 50% more power than Xenos.Macolele

RSX vs Xenos

Triangle Setup
Xbox 360 - 500 Million Triangles/sec
PS3 - 250 Million Triangles/sec

Vertex Shader Processing
Xbox 360 - 6.0 Billion Vertices/sec (using all 48 Unified Pipelines)
Xbox 360 - 2.0 Billion Vertices/sec (using only 16 of the 48 Unified Pipelines)
Xbox 360 - 1.5 Billion Vertices/sec (using only 12 of the 48 Unified Pipelines)
Xbox 360 - 1.0 Billion Vertices/sec (using only 8 of the 48 Unified Pipelines)
PS3 - 1.0 Billion Vertices/sec

Filtered Texture Fetch
Xbox 360 - 8.0 Billion Texels/sec
PS3 - 12.0 Billion Texels/sec

Vertex Texture Fetch
Xbox 360 - 8.0 Billion Texels/sec
PS3 - 4.0 Billion Texels/sec

Pixel Shader Processing with 16 Filtered Texels Per Cycle (Pixel ALU x Clock)
Xbox 360 - 24.0 Billion Pixels/sec (using all 48 Unified Pipelines)
Xbox 360 - 20.0 Billion Pixels/sec (using 40 of the 48 Unified Pipelines)
Xbox 360 - 18.0 Billion Pixels/sec (using 36 of the 48 Unified Pipelines)
Xbox 360 - 16.0 Billion Pixels/sec (using 32 of the 48 Unified Pipelines)
PS3 - 16.0 Billion Pixels/sec

Pixel Shader Processing without Textures (Pixel ALU x Clock)
Xbox 360 - 24.0 Billion Pixels/sec (using all 48 Unified Pipelines)
Xbox 360 - 20.0 Billion Pixels/sec (using 40 of the 48 Unified Pipelines)
Xbox 360 - 18.0 Billion Pixels/sec (using 36 of the 48 Unified Pipelines)
Xbox 360 - 16.0 Billion Pixels/sec (using 32 of the 48 Unified Pipelines)
PS3 - 24.0 Billion Pixels/sec

Multisampled Fill Rate
Xbox 360 - 16.0 Billion Samples/sec (8 ROPS x 4 Samples x 500MHz)
PS3 - 8.0 Billion Samples/sec (8 ROPS x 2 Samples x 500MHz)

Pixel Fill Rate with 4x Multisampled Anti-Aliasing
Xbox 360 - 4.0 Billion Pixels/sec (8 ROPS x 4 Samples x 500MHz / 4)
PS3 - 2.0 Billion Pixels/sec (8 ROPS x 2 Samples x 500MHz / 4)

Pixel Fill Rate without Anti-Aliasing
Xbox 360 - 4.0 Billion Pixels/sec (8 ROPS x 500MHz)
PS3 - 4.0 Billion Pixels/sec (8 ROPS x 500MHz)
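All the fill-rate lines above come from one formula, ROPs x samples per clock x clock. A quick check with the numbers as posted (not independently verified):

[code]
def samples_per_sec(rops, samples, clock_hz):
    return rops * samples * clock_hz

xenos = samples_per_sec(8, 4, 500e6)   # 16.0 billion samples/sec
rsx = samples_per_sec(8, 2, 500e6)     #  8.0 billion samples/sec
print(xenos / 1e9, rsx / 1e9)          # 16.0 8.0

# With 4x MSAA each pixel consumes 4 samples, hence the divide-by-4 rows:
print(xenos / 4 / 1e9, rsx / 4 / 1e9)  # 4.0 2.0
[/code]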