Well, that didn't take long, PC gets GDDR5 system RAM support

This topic is locked from further discussion.


#601 tormentos
Member since 2003 • 33798 Posts

You people never cease to amaze me. GDDR5 has become the new Cell. I give Sony credit; they definitely hype their products well. You guys are once again buying into the Sony hype train, like when they unveiled the PS3 and the Cell. Look, I understand console gamers are excited for the next-gen consoles, but don't start acting like you know all about hardware. Talking about raw flops only makes you look bad, as current PC hardware beats it today. It's an exciting time to be a gamer, as next-gen consoles have taken too long to come out and the current gen has stifled growth with its outdated hardware. I want new experiences in gaming, as this generation is burning me out with all the rehashes we have seen. Better hardware definitely gives developers a chance to be more creative. sirk1264

 

No one is even talking about GDDR5 any more.

And no one is doing anything; the only thing being said here is that the PS4 can match and surpass a 7870 thanks to being efficient, along with the many other pros it has on its side, and that to beat the PS4 something more powerful like the 7970 is needed.


#602 04dcarraher
Member since 2004 • 23859 Posts

[QUOTE="sirk1264"]You people never cease to amaze me. GDDR5 has become the new Cell. I give Sony credit; they definitely hype their products well. You guys are once again buying into the Sony hype train, like when they unveiled the PS3 and the Cell. Look, I understand console gamers are excited for the next-gen consoles, but don't start acting like you know all about hardware. Talking about raw flops only makes you look bad, as current PC hardware beats it today. It's an exciting time to be a gamer, as next-gen consoles have taken too long to come out and the current gen has stifled growth with its outdated hardware. I want new experiences in gaming, as this generation is burning me out with all the rehashes we have seen. Better hardware definitely gives developers a chance to be more creative. tormentos

 

No one is even talking about GDDR5 any more.

And no one is doing anything; the only thing being said here is that the PS4 can match and surpass a 7870 thanks to being efficient, along with the many other pros it has on its side, and that to beat the PS4 something more powerful like the 7970 is needed.

You're clueless; the PS4 would be very, very lucky to match a 7870. A PC with a 3GHz AMD six- or eight-core, or any modern Intel quad, paired with a 7870 will outpace the PS4. And you suggesting that you need a 7970 to beat the PS4 is hilarious. It's no different from comparing a GeForce 8800 vs the 360/PS3, where optimization and efficiency still haven't allowed either console to outdo the nearly 7-year-old GPU.

#603 MK-Professor
Member since 2009 • 4218 Posts

Tormentos is still mad that the PS4 is getting low-end hardware in comparison with what the Xbox 360 got back in 2005.


#604 ronvalencia
Member since 2008 • 29612 Posts

So if you care enough about this GDDR5 thing, you will be able to have it on a PC.

Long story short: when DDR4 arrives, GDDR5 will be rendered irrelevant. Quote:

"Once the DDR4 memory comes along the advantages of GDDR5 start to diminish again. DDR4 will enable higher densities at comparable speeds as well as upgradeable modules which might be more desirable for customers."

Also

GDDR6 Memory Coming in 2014

IgGy621985

Both DDR4 and GDDR5M would be sharing a common SODIMM slot standard.

[Image: GDDR5M SODIMM module]

AMD Kaveri APU looking good. I guess Intel can also use GDDR5M memory modules.
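For anyone who wants to sanity-check these memory claims, peak bandwidth is just the effective transfer rate times the bus width. A minimal sketch, using commonly quoted round figures (the PS4-style GDDR5 and DDR4-3200 numbers below are illustrative assumptions, not official measurements):

```python
def peak_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits):
    """Peak memory bandwidth in GB/s: transfers per second * bytes per transfer."""
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# PS4-style GDDR5: 5500 MT/s effective on a 256-bit bus
print(peak_bandwidth_gb_s(5500, 256))  # 176.0 GB/s

# Dual-channel DDR4-3200 (two 64-bit channels) on a desktop PC
print(peak_bandwidth_gb_s(3200, 128))  # 51.2 GB/s
```

The gap between those two numbers is why GDDR5 matters for a shared CPU+GPU memory pool, and why module standards like GDDR5M were being floated for PCs at all.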


#605 ronvalencia
Member since 2008 • 29612 Posts

Tormentos is still mad that the PS4 is getting low-end hardware in comparison with what the Xbox 360 got back in 2005.

MK-Professor
2013 TDP levels are not the same as late 2005.

#606 04dcarraher
Member since 2004 • 23859 Posts
[QUOTE="MK-Professor"]

Tormentos is still mad that the PS4 is getting low-end hardware in comparison with what the Xbox 360 got back in 2005.

ronvalencia
2013 TDP levels are not the same as late 2005.

Yep, top-end GPUs in 2005/2006 were 100W+ TDP; today's top-end GPUs are 200W+ TDP.

#607 tormentos
Member since 2003 • 33798 Posts

You're clueless; the PS4 would be very, very lucky to match a 7870. A PC with a 3GHz AMD six- or eight-core, or any modern Intel quad, paired with a 7870 will outpace the PS4. And you suggesting that you need a 7970 to beat the PS4 is hilarious. It's no different from comparing a GeForce 8800 vs the 360/PS3, where optimization and efficiency still haven't allowed either console to outdo the nearly 7-year-old GPU. 04dcarraher

 

Yes, so is John Carmack, who claimed 100% efficiency on consoles over PC; so is AMD for saying the Windows API screws efficiency and creates latency; wait, Timothy Lottes, who invented a goddamn algorithm for AA, also stated the same, but it's me who is daydreaming...

A guy who works with GPUs for a living and created an algorithm for AA is inventing crap to favor Sony... You're silly. Yeah, just like the 7800 GTX with 512MB of RAM outpaced the PS3, right?

And to think the PS3 had as much memory for the complete console as the 7800 GTX had for itself, with more bandwidth as well..

 


#608 mitu123
Member since 2006 • 155290 Posts

[QUOTE="sirk1264"]You people never cease to amaze me. GDDR5 has become the new Cell. I give Sony credit; they definitely hype their products well. You guys are once again buying into the Sony hype train, like when they unveiled the PS3 and the Cell. Look, I understand console gamers are excited for the next-gen consoles, but don't start acting like you know all about hardware. Talking about raw flops only makes you look bad, as current PC hardware beats it today. It's an exciting time to be a gamer, as next-gen consoles have taken too long to come out and the current gen has stifled growth with its outdated hardware. I want new experiences in gaming, as this generation is burning me out with all the rehashes we have seen. Better hardware definitely gives developers a chance to be more creative. tormentos

 

No one is even talking about GDDR5 any more.

And no one is doing anything; the only thing being said here is that the PS4 can match and surpass a 7870 thanks to being efficient, along with the many other pros it has on its side, and that to beat the PS4 something more powerful like the 7970 is needed.

You really are hyping the PS4's GPU way, way too much. :| Efficiency is the only thing it has going for it, not raw power; it is still just a modified 7850, and beating a $400 card, when the 7850 is $200-some bucks itself, is pretty absurd, not to mention overclocking and such.


#609 tormentos
Member since 2003 • 33798 Posts

Tormentos is still mad that the PS4 is getting low-end hardware in comparison with what the Xbox 360 got back in 2005.

MK-Professor

 

Xenos: 500 million polygons, unified shaders..

RSX: 275 million polygons..

If anything, this ^^ should be a lesson for you on how something working more efficiently can actually top something stronger while being weaker.

And the incredible thing is that Beyond: Two Souls and The Last of Us actually look better than anything on the 360..

And that was on a horrible console to code for; imagine how things will be now that the PS4 is way easier and much more efficient..


#610 MK-Professor
Member since 2009 • 4218 Posts

[QUOTE="MK-Professor"]

Tormentos is still mad that the PS4 is getting low-end hardware in comparison with what the Xbox 360 got back in 2005.

tormentos

 

Xenos: 500 million polygons, unified shaders..

RSX: 275 million polygons..

If anything, this ^^ should be a lesson for you on how something working more efficiently can actually top something stronger while being weaker.

And the incredible thing is that Beyond: Two Souls and The Last of Us actually look better than anything on the 360..

And that was on a horrible console to code for; imagine how things will be now that the PS4 is way easier and much more efficient..

Keep dreaming...

Also, why do you compare Xenos vs the RSX and not Xenos vs the RSX+Cell? Simply because you are desperately trying to support your stupid argument.


#611 Martin_G_N
Member since 2006 • 2124 Posts

Seeing what devs have done with the PS3 and the X360, and the previous consoles as well, I think it's wrong of us to use current PC games and GPUs to predict how the PS4 is going to perform. The GPU is powerful enough to utilize all the features in CryEngine 3, and that is the most advanced engine out there. Since the PS4 is this powerful, more skilled devs will get to use those same graphics features, and we will get tons of great-looking games. And the CPU may be at a low clock, but it's an out-of-order execution CPU, so it's still better per core than the X360's and PS3's CPUs. And for games I think that is enough.

PC getting faster system RAM isn't a surprise, but it takes a while before every PC has it.


#612 04dcarraher
Member since 2004 • 23859 Posts

[QUOTE="04dcarraher"]You're clueless; the PS4 would be very, very lucky to match a 7870. A PC with a 3GHz AMD six- or eight-core, or any modern Intel quad, paired with a 7870 will outpace the PS4. And you suggesting that you need a 7970 to beat the PS4 is hilarious. It's no different from comparing a GeForce 8800 vs the 360/PS3, where optimization and efficiency still haven't allowed either console to outdo the nearly 7-year-old GPU. tormentos

 

Yes so is John Carmak which claim 100% efficiency on consoles over PC,so is AMD for saying Windows API is screw efficiency and create latency,wait Timothy Lottes who invented a god damn algorithm for AA,also state the same but is me who is day dreaming...

 

A guy who works with GPU for a living and created an algorihtm for AA is inventing crap to favor sony ..Your silly yeah just like the 7800GTX with 512MB of ram outpace the PS3 right.?

And to think the PS3 had as much memory for the complete console as the 7800GTX had for it self,with more bandwith as well..

 

It wasn't about efficiency; he talked about optimization, and his comment was a generalized statement that did not explain what or why console optimization allows a set piece of hardware to perform better than equal PC hardware. And of course you totally ignore that optimization includes the usage of lower-quality assets to hit a set standard. You also ignore that console efficiency means nothing on PC when you have a fast enough CPU to overcome the small overhead with DirectX 11 (and not DirectX 9)...

Also, that article with the FXAA guy was saying texture fetching could be up to 30% faster if Sony gets in and redesigns the coding... not the whole console's performance; again, you totally missed that point too. You're a wandering Sony fanboy. All you need to do is look at the 7800 GTX with any multiplat game in its lifespan, or even RE5 from 2009; the 7800 GTX was as fast and had better textures along with higher resolutions.


#613 Mr720fan
Member since 2013 • 2795 Posts

I don't think it's a big deal really; just cows needing something to make them feel good after losing out this gen.


#614 04dcarraher
Member since 2004 • 23859 Posts

Comparing apples to apples:

AMD's Jaguar architecture is the impending successor to the "Bobcat" architecture found in the company's current low-power APUs, and it is not especially beefy. While the idea of an octa-core console sounds dreamy on the surface, the illusion is shattered when you realize that on the PC side of things, Jaguar APUs will be modest processors targeted at tablets, high-end netbooks (ha!), and entry-level laptops.

In other words, the PlayStation 4's CPU performance isn't likely to rock your socks compared to a PC sporting an AMD Piledriver- or Bulldozer-based processor. It might not even trump a lowly Intel Core i3 processor, especially if Eurogamer's early PlayStation 4 leaks continue to prove accurate and those eight cores are clocked at 1.6GHz.

Then there's the GPU. 1.84 teraflops of performance puts the GPU just ahead of the Radeon HD 7850 and well under the Radeon HD 7870. That also holds true since the PlayStation 4 GPU's 18 compute units sport a build similar to the GCN architecture (i.e. GCN 2) used to build AMD's Radeon HD 7000-series graphics cards.

The Radeon HD 7850 is nothing to sneeze at. Indeed, if you're looking for a midrange video card, it's a good balanced option. But it's still just a midrange card, not a graphical trailblazer, and yet it will form the backbone of the PlayStation 4's gaming chops for years to come.

Overall, if you compare its hardware to what's available in today's PC landscape, the PlayStation 4 is basically powered by a low-end CPU and a midrange GPU. It even packs a mechanical hard drive in an age when many PC gamers have moved on to lightning-quick solid-state drives or even multiple mechanical hard drives.
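The 1.84-teraflop figure in that quote falls straight out of GCN arithmetic. A quick sketch, assuming the commonly reported PS4 configuration (18 CUs at 800 MHz) and the standard GCN figures of 64 shader lanes per CU, each doing one fused multiply-add (2 FLOPs) per clock; the 7850/7870 entries use their stock clocks:

```python
def gcn_peak_tflops(compute_units, clock_ghz, lanes_per_cu=64, flops_per_lane_clk=2):
    """Peak single-precision TFLOPS for a GCN-style GPU:
    CUs * lanes per CU * FLOPs per lane per clock * GHz, scaled to TFLOPS."""
    return compute_units * lanes_per_cu * flops_per_lane_clk * clock_ghz / 1000.0

print(gcn_peak_tflops(18, 0.800))  # PS4: 1.8432 TFLOPS
print(gcn_peak_tflops(16, 0.860))  # Radeon HD 7850: ~1.76 TFLOPS
print(gcn_peak_tflops(20, 1.000))  # Radeon HD 7870: 2.56 TFLOPS
```

So on paper the PS4 part really does sit between the 7850 and the 7870, which is exactly where both sides of this thread keep placing it.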


#615 tormentos
Member since 2003 • 33798 Posts

You really are hyping the PS4's GPU way, way too much. :| Efficiency is the only thing it has going for it, not raw power; it is still just a modified 7850, and beating a $400 card, when the 7850 is $200-some bucks itself, is pretty absurd, not to mention overclocking and such.

mitu123

Am I?

Let's recap some of the advantages the PS4 has over PC, with facts rather than opinions.

The PS4 doesn't run Windows or its API, which do impact performance.

[benchmark images: fz868m.jpg, 294tbhg.jpg]

The PS4 has a chip to handle the OS, which means no background tasks and more free resources, on a CPU that already doesn't have to run a bloated OS.

The PS4 has the CPU and GPU on the same die with shared memory, which solves quite a few problems PCs have, like this:

Today, a growing number of mainstream applications require the high performance and power efficiency achievable only through such highly parallel computation. But current CPUs and GPUs have been designed as separate processing elements and do not work together efficiently and are cumbersome to program. Each has a separate memory space, requiring an application to explicitly copy data from CPU to GPU and then back again.

A program running on the CPU queues work for the GPU using system calls through a device driver stack managed by a completely separate scheduler. This introduces significant dispatch latency, with overhead that makes the process worthwhile only when the application requires a very large amount of parallel computation. Further, if a program running on the GPU wants to directly generate work-items, either for itself or for the CPU, it is impossible today!

http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/

So programs running on the GPU can't generate work for themselves or for the CPU? Wait, the PS4 will be able to do that... interesting...

The PS4 can be coded to the metal, something you can't do on PC without making your game completely incompatible, even with models from the same vendor.

The PS4's GPU is a modified 78XX; it has 18 unified CUs that can be used as developers see fit, and they are more flexible and efficient than your average CU.
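The copy-and-dispatch overhead the AMD quote describes can be put into back-of-the-envelope numbers. A hedged sketch; the 16 GB/s PCIe figure and the 64 MB buffer size are illustrative assumptions, not measurements of any real game:

```python
def transfer_ms(buffer_mb, link_gb_s):
    """Milliseconds to move a buffer across a link of the given bandwidth."""
    return buffer_mb / 1024.0 / link_gb_s * 1000.0

# Discrete GPU: data must cross PCIe to the card and back again if the
# CPU needs the result, on top of driver/scheduler dispatch latency.
round_trip_ms = 2 * transfer_ms(64, 16)  # 64 MB each way over ~16 GB/s

# Unified-memory APU (the PS4 layout): CPU and GPU share one address
# space, so handing work over is passing a pointer, not copying data.
unified_copy_ms = 0.0

print(round_trip_ms)  # ~7.8 ms, a large chunk of a 16.7 ms frame budget
```

This is the arithmetic behind "worthwhile only for very large parallel workloads": small GPU jobs on a discrete card can spend more time in transit than in compute.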


#616 04dcarraher
Member since 2004 • 23859 Posts

OMG, you still ignore that any overhead is processed by the CPU :lol:

Let's just ignore the fact that modern PC CPUs are multiple times faster than what's in the PS4, overcoming any API overhead :lol:


#617 mitu123
Member since 2006 • 155290 Posts

[QUOTE="mitu123"]

You really are hyping the PS4's GPU way, way too much. :| Efficiency is the only thing it has going for it, not raw power; it is still just a modified 7850, and beating a $400 card, when the 7850 is $200-some bucks itself, is pretty absurd, not to mention overclocking and such.

tormentos

Am I?

Let's recap some of the advantages the PS4 has over PC, with facts rather than opinions.

The PS4 doesn't run Windows or its API, which do impact performance.

The PS4 has a chip to handle the OS, which means no background tasks and more free resources, on a CPU that already doesn't have to run a bloated OS.

The PS4 has the CPU and GPU on the same die with shared memory, which solves quite a few problems PCs have, like this:

Today, a growing number of mainstream applications require the high performance and power efficiency achievable only through such highly parallel computation. But current CPUs and GPUs have been designed as separate processing elements and do not work together efficiently and are cumbersome to program. Each has a separate memory space, requiring an application to explicitly copy data from CPU to GPU and then back again.

A program running on the CPU queues work for the GPU using system calls through a device driver stack managed by a completely separate scheduler. This introduces significant dispatch latency, with overhead that makes the process worthwhile only when the application requires a very large amount of parallel computation. Further, if a program running on the GPU wants to directly generate work-items, either for itself or for the CPU, it is impossible today!

http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/

So programs running on the GPU can't generate work for themselves or for the CPU? Wait, the PS4 will be able to do that... interesting...

The PS4 can be coded to the metal, something you can't do on PC without making your game completely incompatible, even with models from the same vendor.

The PS4's GPU is a modified 78XX; it has 18 unified CUs that can be used as developers see fit, and they are more flexible and efficient than your average CU.

And yet none of that makes it near a 7970...


#618 04dcarraher
Member since 2004 • 23859 Posts

[QUOTE="tormentos"]

[QUOTE="mitu123"]

You really are hyping the PS4's GPU way, way too much. :| Efficiency is the only thing it has going for it, not raw power; it is still just a modified 7850, and beating a $400 card, when the 7850 is $200-some bucks itself, is pretty absurd, not to mention overclocking and such.

mitu123

Am I? ...

 

And yet none of that makes it near a 7970...

Or even a 7870 for that matter; as long as you have the CPU power to process "the overhead", the GPU does not suffer. Take current multiplat games still only requiring dual-core CPUs from 2006: all this nonsense talking about API overhead still ignores that the CPUs take care of these things. And a low-clocked 8-core Jaguar-based CPU isn't going to hurt PC; if anything it will push multithreaded coding forward.


#619 tormentos
Member since 2003 • 33798 Posts

Comparing apples to apples:

AMD's Jaguar architecture is the impending successor to the "Bobcat" architecture found in the company's current low-power APUs, and it is not especially beefy. While the idea of an octa-core console sounds dreamy on the surface, the illusion is shattered when you realize that on the PC side of things, Jaguar APUs will be modest processors targeted at tablets, high-end netbooks (ha!), and entry-level laptops.

In other words, the PlayStation 4's CPU performance isn't likely to rock your socks compared to a PC sporting an AMD Piledriver- or Bulldozer-based processor. It might not even trump a lowly Intel Core i3 processor, especially if Eurogamer's early PlayStation 4 leaks continue to prove accurate and those eight cores are clocked at 1.6GHz.

Then there's the GPU. 1.84 teraflops of performance puts the GPU just ahead of the Radeon HD 7850 and well under the Radeon HD 7870. That also holds true since the PlayStation 4 GPU's 18 compute units sport a build similar to the GCN architecture (i.e. GCN 2) used to build AMD's Radeon HD 7000-series graphics cards.

The Radeon HD 7850 is nothing to sneeze at. Indeed, if you're looking for a midrange video card, it's a good balanced option. But it's still just a midrange card, not a graphical trailblazer, and yet it will form the backbone of the PlayStation 4's gaming chops for years to come.

Overall, if you compare its hardware to what's available in today's PC landscape, the PlayStation 4 is basically powered by a low-end CPU and a midrange GPU. It even packs a mechanical hard drive in an age when many PC gamers have moved on to lightning-quick solid-state drives or even multiple mechanical hard drives.

04dcarraher

OK, apples to apples then..

You are right, Jaguars are not ultra-powerful CPUs, but they don't have to be, because while some of you feed Windows 8GB of memory or even more, the PS4 will run a very light OS on a streamlined system design. Remember, apples to apples: the Jaguar in the PS4 will not even need to break a sweat on the OS, it has a chip for that, and when you consider that most games are actually GPU bound and not CPU bound, it makes even more sense.

But apples to apples, the Jaguar in the PS4 doesn't need to send information over a bus to the GPU and then wait for it to come back; on the PS4 the CPU and GPU are on the same die with shared memory.

The 7850 vs 7870 crap... The PS4 is not well under the 7870 unless you are talking TFLOPS crap that means nothing on consoles; having 2+ TF means sh**. The 7850 has 1.76, and the 7870 beats it by a very small margin in some games, not even by 5 frames. Yeah, that is "well under". You consistently ignore benchmarks and open quotes from well-known people in the industry in your endless quest to downplay the PS4.

It is a midrange GPU that will perform better than midrange, and since I, unlike you, know that optimization doesn't mean lowering everything (that is a trade-off), I know the PS4 will be able to beat the 7870. You can mark this quote: 5 years from now, the 7870 will be choking on games the PS4 will be running fine.

So what advantage do multiple mechanical drives have over the PS4's HDD, other than space? Because SSDs at least give a boost in speed, but SSDs are very expensive, and considering that you people actually claim PC gaming is not expensive, that cheap rigs can beat consoles, SSDs are a killer blow to the whole argument: a 128GB SSD is like $150; with that I buy a 1+TB HDD for my PS4.

An SSD will not deliver any advantage other than faster loading, if your PC can keep the GPU well fed.
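Whatever one makes of the rest of the argument, the loading-time part really is simple arithmetic. A sketch with round, hypothetical drive speeds (real drives vary with seek patterns, compression, and caching):

```python
def load_seconds(asset_gb, read_mb_s):
    """Seconds to stream a level's assets at a given sequential read speed."""
    return asset_gb * 1024.0 / read_mb_s

print(load_seconds(2, 100))  # ~20.5 s for 2 GB from a ~100 MB/s mechanical HDD
print(load_seconds(2, 500))  # ~4.1 s for the same 2 GB from a ~500 MB/s SATA SSD
```

Once the assets are in RAM, the drive stops mattering for frame rate, which is the "only faster loading" point: the SSD wins the load screen, not the benchmark.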


#620 Martin_G_N
Member since 2006 • 2124 Posts

Comparing apples to apples:

AMD's Jaguar architecture is the impending successor to the "Bobcat" architecture found in the company's current low-power APUs, and it is not especially beefy. While the idea of an octa-core console sounds dreamy on the surface, the illusion is shattered when you realize that on the PC side of things, Jaguar APUs will be modest processors targeted at tablets, high-end netbooks (ha!), and entry-level laptops.

In other words, the PlayStation 4's CPU performance isn't likely to rock your socks compared to a PC sporting an AMD Piledriver- or Bulldozer-based processor. It might not even trump a lowly Intel Core i3 processor, especially if Eurogamer's early PlayStation 4 leaks continue to prove accurate and those eight cores are clocked at 1.6GHz.

Then there's the GPU. 1.84 teraflops of performance puts the GPU just ahead of the Radeon HD 7850 and well under the Radeon HD 7870. That also holds true since the PlayStation 4 GPU's 18 compute units sport a build similar to the GCN architecture (i.e. GCN 2) used to build AMD's Radeon HD 7000-series graphics cards.

The Radeon HD 7850 is nothing to sneeze at. Indeed, if you're looking for a midrange video card, it's a good balanced option. But it's still just a midrange card, not a graphical trailblazer, and yet it will form the backbone of the PlayStation 4's gaming chops for years to come.

Overall, if you compare its hardware to what's available in today's PC landscape, the PlayStation 4 is basically powered by a low-end CPU and a midrange GPU. It even packs a mechanical hard drive in an age when many PC gamers have moved on to lightning-quick solid-state drives or even multiple mechanical hard drives.

04dcarraher

The Jaguar CPU is still better than the X360's and PS3's CPUs, since it's an out-of-order execution CPU... correct? It's not exactly the CPU that has been the real bottleneck on the current-gen consoles. And if we consider all the processing that the PS4's CPU no longer has to do compared to the Cell (graphics, physics, and even the sound processing is done by its own processor this time), the CPU is probably good enough for a console. They have done a few key things to remove bottlenecks that previous consoles had, and the 8GB of GDDR5 is one of them. The GPU is around the power we could have expected next gen, and it will give us more great games that utilize DX11 features, as well as new graphics features that will be developed. It may also pack a mechanical HDD (which I hope is changeable like in the PS3), but it will also have an internal storage device for the OS and maybe some extra caching, perhaps?


#621 tormentos
Member since 2003 • 33798 Posts

And yet none of that makes it near a 7970...

mitu123

In what part did I say it would be near a 7970?

Because what I claim is that you will need a 7970 or better to beat the PS4, not to match it... :roll:


#622 04dcarraher
Member since 2004 • 23859 Posts

[QUOTE="mitu123"]

And yet none of that makes it near a 7970...

tormentos

 

In what part did I say it would be near a 7970?

Because what I claim is that you will need a 7970 or better to beat the PS4, not to match it... :roll:

And you will still be wrong; you would only need a 7870 XT or 7950 to clearly beat the PS4.


#623 tormentos
Member since 2003 • 33798 Posts

...

Also, why do you compare Xenos vs the RSX and not Xenos vs the RSX+Cell? Simply because you are desperately trying to support your stupid argument.

MK-Professor

Wait, wait, wait..

You people have spent a complete generation talking crap, and still do even in this thread, about how Cell was vaporware, how GDDR5 is the new Cell, and how Cell was crap, and now all of a sudden Cell is great and is actually to blame for the graphics parity both consoles basically have? You can't have it both ways.

But that is OK, because you basically validated a smart design over brute force. The GPU in the 360 was extremely powerful compared to the weaker RSX, something even I have admitted, but the RSX was able to keep up with it just by having the CPU offload some tasks, which translated into the GPU actually getting closer to its peak, while Xenos, though more capable, ran everything by itself at a cost, because everything Xenos had to do for itself meant it got further away from that peak.
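The "smart design vs brute force" argument reduces to effective throughput being peak times sustained utilization. A sketch of that claim's shape; the utilization percentages below are purely illustrative assumptions (nobody in this thread has real measured figures), not data:

```python
def effective_gflops(peak_gflops, utilization):
    """Throughput actually achieved: peak scaled by how busy the units stay."""
    return peak_gflops * utilization

# A weaker part that is kept well fed can top a stronger part that stalls.
closed_box = effective_gflops(1843, 0.80)  # fixed hardware, thin API (assumed 80%)
layered_pc = effective_gflops(2560, 0.55)  # thicker driver/API stack (assumed 55%)

print(closed_box, layered_pc, closed_box > layered_pc)
```

The whole thread is really an argument about those two utilization numbers: the pro-console side assumes the gap is this large, and the pro-PC side assumes a fast CPU shrinks it to nearly nothing.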


#624 tormentos
Member since 2003 • 33798 Posts

And you will still be wrong; you would only need a 7870 XT or 7950 to clearly beat the PS4.

04dcarraher

We'll see about that soon enough; thankfully it's not just me who thinks consoles are more efficient than PCs, some much more credible people than both of us do.


#626 04dcarraher
Member since 2004 • 23859 Posts

[QUOTE="04dcarraher"]

And you will still be wrong; you would only need a 7870 XT or 7950 to clearly beat the PS4.

tormentos

We'll see about that soon enough; thankfully it's not just me who thinks consoles are more efficient than PCs, some much more credible people than both of us do.

Being more efficient vs sheer processing power: your efficiency arguments are flawed, since all API overhead is handled on the CPU; as long as the CPU is fast enough, you have no overhead to worry about. Now, if you were comparing similarly low-clocked CPUs, or much older dual- or quad-core CPUs, then yes, you would have a point. But the fact is that even current console-based multiplat games only require same-era (2005/2006) dual-core CPUs in the 3GHz range to run as well as consoles, even with the massive overhead you keep spouting about.

#627 ronvalencia
Member since 2008 • 29612 Posts

Comparing apples to apples:

AMD's Jaguar architecture is the impending successor to the "Bobcat" architecture found in the company's current low-power APUs, and it is not especially beefy. While the idea of an octa-core console sounds dreamy on the surface, the illusion is shattered when you realize that on the PC side of things, Jaguar APUs will be modest processors targeted at tablets, high-end netbooks (ha!), and entry-level laptops.

In other words, the PlayStation 4's CPU performance isn't likely to rock your socks compared to a PC sporting an AMD Piledriver- or Bulldozer-based processor. It might not even trump a lowly Intel Core i3 processor, especially if Eurogamer's early PlayStation 4 leaks continue to prove accurate and those eight cores are clocked at 1.6GHz.

04dcarraher

From http://www.sweclockers.com/nyhet/16597-amd-temash-specifikationer-och-prestanda


From http://www.h-online.com/newsticker/news/item/Processor-Whispers-About-Giants-and-Slingshots-1809573.html

AMD Temash SoC ranges from 5 watts to 25 watts.


#628 ronvalencia
Member since 2008 • 29612 Posts

Tormentos is still mad that the PS4 is getting low-end hardware in comparison with what the Xbox 360 got back in 2005.

MK-Professor

A GCN part with 18 CUs is not low-end hardware; i.e., it's not Bonaire XT (aka Radeon HD 7790).

http://wccftech.com/amd-reported-preparing-radeon-hd-7790-based-bonaire-xt-gpu-launches-april/

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#629 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="mitu123"]

You really are hyping the PS4's GPU way, way too much. :| Efficiency is the only thing it has going for it, not raw power; it is still a modified 7850, and claiming it beats a $400 card, when the 7850 itself is $200-some, is pretty absurd, not to mention overclocking and such.

tormentos

Am I?

Let's recap some of the advantages the PS4 has over the PC, with facts rather than opinions.

The PS4 doesn't run Windows or its APIs, which do impact performance.

fz868m.jpg

294tbhg.jpg

The PS4 has a chip to handle the OS, which means no background tasks and more free resources, on a CPU that already doesn't have to run a bloated OS.

The PS4 has the CPU and GPU on the same die with shared memory, which solves quite a few problems PCs have, like this:

Today, a growing number of mainstream applications require the high performance and power efficiency achievable only through such highly parallel computation. But current CPUs and GPUs have been designed as separate processing elements and do not work together efficiently and are cumbersome to program. Each has a separate memory space, requiring an application to explicitly copy data from CPU to GPU and then back again.

A program running on the CPU queues work for the GPU using system calls through a device driver stack managed by a completely separate scheduler. This introduces significant dispatch latency, with overhead that makes the process worthwhile only when the application requires a very large amount of parallel computation. Further, if a program running on the GPU wants to directly generate work-items, either for itself or for the CPU, it is impossible today!

http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/

So programs running on the GPU can't generate work for themselves or for the CPU? Wait, the PS4 will be able to do that... interesting.

The PS4 can be coded to the metal, something you can't do on PC without making your game completely incompatible, even with models from the same vendor.

The PS4 GPU is a modified 78XX; it has 18 unified CUs that can be used as developers see fit, and they are more flexible and efficient than your average CU.

On the PC, batching would be important.

Lines-of-Code-and-Performance_zpsa23acdf

Across the GPU software stacks, performance remains similar even with different overheads. Low overhead ensures consistent performance with low-end CPUs.

On the PC, an Intel Core i7-3770-class CPU would be required for best performance from AMD GCN, e.g. AMD supplied an Intel Core i7-3770K and motherboard for the Radeon HD 8790M review. http://techreport.com/review/24086/a-first-look-at-amd-radeon-hd-8790m
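The "copy"/"copy back" cost argued over in that chart can be sketched numerically. The toy model below compares a discrete-GPU dispatch (copy in, compute, copy back) against a shared-memory dispatch that skips both copies, in the spirit of AMD's HSA description quoted earlier; the bandwidth and timing figures are illustrative assumptions, not real hardware numbers.

```python
# Toy model of one GPU job on two memory architectures.
PCIE_GBPS = 8.0          # assumed effective PCIe copy bandwidth (GB/s)

def discrete_dispatch_ms(data_gb, compute_ms):
    copy_in = data_gb / PCIE_GBPS * 1000.0    # CPU -> GPU staging copy
    copy_back = data_gb / PCIE_GBPS * 1000.0  # GPU -> CPU result copy
    return copy_in + compute_ms + copy_back

def shared_memory_dispatch_ms(data_gb, compute_ms):
    # CPU and GPU address the same memory: no staging copies at all.
    return compute_ms

job_gb, job_ms = 0.5, 10.0
print(discrete_dispatch_ms(job_gb, job_ms))       # 135.0 -> copies dominate
print(shared_memory_dispatch_ms(job_gb, job_ms))  # 10.0
```

For a short compute job, the two staging copies dwarf the compute time itself, which is exactly the inefficiency the "copy"/"copy back" bars are meant to show.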

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#630 savagetwinkie
Member since 2008 • 7981 Posts
[QUOTE="tormentos"]

[QUOTE="04dcarraher"]

and you will still be wrong, you would only need a 7870xt or 7950 to clearly beat the PS4

04dcarraher

We'll see about that soon enough. Thankfully it's not just me who thinks consoles are more efficient than PCs; some much more credible people than both of us do too.

being more efficient vs sheer processing power. Your efficiency arguments are flawed since all API overhead is all done on cpu's as long the cpu are fast enough you have no overhead to worry about. Now if you where comparing similar low clock cpu's and or much older based dual or quad core based cpu's then yes you would have a point. but the fact is that even with current console based multiplat games only require same era(2005/2006) based dual core cpu's around 3 ghz range to run the games as well as consoles, even with the massive overhead you keep on spouting about.

It's not all CPU-based inefficiency: the GPU isn't being used as well as it could be, because the drivers are more generic, and regardless of how fast CPUs are, the extra overhead creates latency.
Avatar image for lostrib
lostrib

49999

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#631 lostrib
Member since 2009 • 49999 Posts

[QUOTE="04dcarraher"]

and you will still be wrong, you would only need a 7870xt or 7950 to clearly beat the PS4

tormentos

Will see about soon enough,thankfully is not just mean who thinks consoles are more efficient than PC,some much more credible people than both of us do.

Being more efficient is not the same as being more powerful

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#632 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="04dcarraher"][QUOTE="tormentos"]

Will see about soon enough,thankfully is not just mean who thinks consoles are more efficient than PC,some much more credible people than both of us do.

savagetwinkie

being more efficient vs sheer processing power. Your efficiency arguments are flawed since all API overhead is all done on cpu's as long the cpu are fast enough you have no overhead to worry about. Now if you where comparing similar low clock cpu's and or much older based dual or quad core based cpu's then yes you would have a point. but the fact is that even with current console based multiplat games only require same era(2005/2006) based dual core cpu's around 3 ghz range to run the games as well as consoles, even with the massive overhead you keep on spouting about.

its not all CPU based inefficiency, the GPU isn't being used as well as it could be because the drivers are more generic, and regardless of how fast CPU's are, the extra overhead creates latency.

Again,

Lines-of-Code-and-Performance_zpsa23acdf

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#633 savagetwinkie
Member since 2008 • 7981 Posts

[QUOTE="savagetwinkie"][QUOTE="04dcarraher"] being more efficient vs sheer processing power. Your efficiency arguments are flawed since all API overhead is all done on cpu's as long the cpu are fast enough you have no overhead to worry about. Now if you where comparing similar low clock cpu's and or much older based dual or quad core based cpu's then yes you would have a point. but the fact is that even with current console based multiplat games only require same era(2005/2006) based dual core cpu's around 3 ghz range to run the games as well as consoles, even with the massive overhead you keep on spouting about.ronvalencia

its not all CPU based inefficiency, the GPU isn't being used as well as it could be because the drivers are more generic, and regardless of how fast CPU's are, the extra overhead creates latency.

Again,

Lines-of-Code-and-Performance_zpsa23acdf

again + graph means **** wtf does measuring lines of code have to do with anything?

Avatar image for 04dcarraher
04dcarraher

23859

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#634 04dcarraher
Member since 2004 • 23859 Posts

[QUOTE="ronvalencia"]

[QUOTE="savagetwinkie"] its not all CPU based inefficiency, the GPU isn't being used as well as it could be because the drivers are more generic, and regardless of how fast CPU's are, the extra overhead creates latency.savagetwinkie

Again,

 

 

again + graph means **** wtf does measuring lines of code have to do with anything?

It's showing you that your "drivers are more generic, causing latency" claim is wrong in this day and age, with modern OSes, drivers and APIs.

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#635 savagetwinkie
Member since 2008 • 7981 Posts

[QUOTE="savagetwinkie"]

[QUOTE="ronvalencia"]

Again,

04dcarraher

again + graph means **** wtf does measuring lines of code have to do with anything?

its showing you that your "drivers are more generic" causing latency is wrong in this day and age with modern OS's,drivers and API's.

Ughh no, it's not showing that at all.

Also, "drivers are more generic" and "overhead causing latency" are two different issues.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#636 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="04dcarraher"]

[QUOTE="savagetwinkie"] again + graph means **** wtf does measuring lines of code have to do with anything?

savagetwinkie

its showing you that your "drivers are more generic" causing latency is wrong in this day and age with modern OS's,drivers and API's.

ughh no, its not showing that at all,

Did you miss the "Launch", "Copy" and "Copy Back" sections?

The point of reference is the CPU bar; compare it against the others. "HSA Bolt" is almost as good as the plain CPU version.

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#637 savagetwinkie
Member since 2008 • 7981 Posts

[QUOTE="savagetwinkie"][QUOTE="04dcarraher"] its showing you that your "drivers are more generic" causing latency is wrong in this day and age with modern OS's,drivers and API's.

ronvalencia

ughh no, its not showing that at all,

It shows the overheads, e.g. copy and compile (the JIT recompiler on the driver side).

Did you miss the "Launch" and "Copy" sections?

The point of reference is the CPU bar; compare it against the others. "HSA Bolt" is almost as good as the plain CPU version.

And there is nothing about graphics APIs, which is where the overhead is. They are showing OpenCL, and the charts don't really say anything about the algorithms or what they are used for. You are showing HSA having the same lines of code as a CPU but a lot more performance; what does this have to do with DirectX overhead? DirectX adds a lot of kernel calls, which eat up CPU time on the event loop; it has to buffer a lot of the calls, especially in a batch, so they take more resources in memory, and the calls from games are generally generic, so they aren't optimized per piece of hardware.
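The batching point can be made concrete with a toy cost model: every call into the driver pays a fixed kernel-transition cost, so submitting many small draws separately multiplies that cost, while one batched submission pays it once. The per-call figures below are arbitrary illustrative numbers, not measured DirectX costs.

```python
CALL_OVERHEAD_US = 30.0   # assumed fixed cost of one user->kernel round trip
PER_DRAW_US = 5.0         # assumed cost of encoding one draw's state

def unbatched_us(n_draws):
    # One API call (and one kernel transition) per draw.
    return n_draws * (CALL_OVERHEAD_US + PER_DRAW_US)

def batched_us(n_draws):
    # One API call carrying all draws; the fixed cost is paid once.
    return CALL_OVERHEAD_US + n_draws * PER_DRAW_US

print(unbatched_us(1000))  # 35000.0 microseconds
print(batched_us(1000))    # 5030.0 microseconds
```

The fixed per-call cost is the term a thinner console-style API can shrink; the per-draw encoding cost remains either way.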
Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#638 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"]

[QUOTE="savagetwinkie"] ughh no, its not showing that at all, savagetwinkie

It shows the overheads e.g. copy and compile (JIT recomplier on the driver side).

Did you missed the "Launch" and "Copy" sections?

The point of reference is the CPU bar and compare it against the others. "HSA Bolt" is almost as good as the normal CPU.

And there is nothing about graphics APIs, where the overhead is. They are showing opencl, the charts don't really say anything about the algorithms or what they are used for. You are showing HSA having the same Lines of code as a CPU but a lot more performance, what does this have to do with directx overhead? Directx add's a lot of kernel calls which eats up cpu time on the event loop, it has to buffer a lot of the calls especially in batch so they take more resources in memory, and the calls are generally generic from games so they aren't optimized per hardware.

Current C++ AMP runs on top of DX's Compute Shaders. HSA Bolt shows it has removed "Copy" and "Copy Back".

Direct3D doesn't run directly on GPUs, i.e. it has to go through the driver's JIT recompiler.

Direct3D can be optimized per GPU family, e.g. GCN (Gaming Evolved) vs Kepler (TWIMTBP).

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#639 savagetwinkie
Member since 2008 • 7981 Posts
[QUOTE="savagetwinkie"][QUOTE="ronvalencia"]

It shows the overheads e.g. copy and compile (JIT recomplier on the driver side).

Did you missed the "Launch" and "Copy" sections?

The point of reference is the CPU bar and compare it against the others. "HSA Bolt" is almost as good as the normal CPU.

ronvalencia
And there is nothing about graphics APIs, where the overhead is. They are showing opencl, the charts don't really say anything about the algorithms or what they are used for. You are showing HSA having the same Lines of code as a CPU but a lot more performance, what does this have to do with directx overhead? Directx add's a lot of kernel calls which eats up cpu time on the event loop, it has to buffer a lot of the calls especially in batch so they take more resources in memory, and the calls are generally generic from games so they aren't optimized per hardware.

Current C++ AMP runs on top of DX's Compute Shaders.

Ok, what does a lines-of-code/performance ratio have to do with overhead? This doesn't seem to be measuring performance or overhead. Secondly, DirectCompute is based on DirectX, but how related is it to gaming overhead?
Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#640 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"][QUOTE="savagetwinkie"] And there is nothing about graphics APIs, where the overhead is. They are showing opencl, the charts don't really say anything about the algorithms or what they are used for. You are showing HSA having the same Lines of code as a CPU but a lot more performance, what does this have to do with directx overhead? Directx add's a lot of kernel calls which eats up cpu time on the event loop, it has to buffer a lot of the calls especially in batch so they take more resources in memory, and the calls are generally generic from games so they aren't optimized per hardware.savagetwinkie
Current C++ AMP runs on top of DX's Compute Shaders.

Ok what does lines of code/performance ratio have to do with overhead? This doesn't seem to be measuring performance/overhead Secondly DirectCompute is based on DirectX but how related is it to gaming overhead?

Direct3D games are not even compiled for the GPU's ISA, i.e. shaders need to be JIT recompiled before the GPU can execute them.

AMD_IL_APP.jpg

From http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/

A program running on the CPU queues work for the GPU using system calls through a device driver stack managed by a completely separate scheduler. This introduces significant dispatch latency, with overhead that makes the process worthwhile only when the application requires a very large amount of parallel computation.
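The quoted point, that GPU dispatch is "worthwhile only when the application requires a very large amount of parallel computation", can be restated as a break-even calculation: the fixed dispatch latency must be amortized by the GPU's throughput advantage. All figures below are illustrative assumptions, not measurements of any real driver stack.

```python
DISPATCH_OVERHEAD_US = 50.0  # assumed fixed driver/queue latency per GPU job
CPU_US_PER_ITEM = 1.0        # assumed CPU cost per work item
GPU_US_PER_ITEM = 0.1        # assumed GPU cost per item (10x throughput)

def cpu_time_us(n):
    return n * CPU_US_PER_ITEM

def gpu_time_us(n):
    return DISPATCH_OVERHEAD_US + n * GPU_US_PER_ITEM

# Break-even: n * 1.0 = 50 + n * 0.1  ->  n = 50 / 0.9, roughly 56 items.
break_even = next(n for n in range(1, 10_000) if gpu_time_us(n) < cpu_time_us(n))
print(break_even)  # 56
```

Below the break-even size, the dispatch overhead makes the GPU path a net loss, which is why small workloads stay on the CPU under this kind of driver stack.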

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#641 savagetwinkie
Member since 2008 • 7981 Posts

[QUOTE="savagetwinkie"][QUOTE="ronvalencia"] Current C++ AMP runs on top of DX's Compute Shaders.ronvalencia

Ok what does lines of code/performance ratio have to do with overhead? This doesn't seem to be measuring performance/overhead Secondly DirectCompute is based on DirectX but how related is it to gaming overhead?

Direct3D games are not even complied for the GPU ISA i.e. needs to be JIT recomplied before the GPU executes it.

AMD_IL_APP.jpg

This has nothing to do with the overhead. Seriously, stop posting random ****: you posted a graph that basically says "I won't have to type as much to get DirectCompute performance using HSA". It's not related to the overhead at all, and it has even less to do with comparing the overhead of DirectX to console implementations of a graphics API.

Edit: back to the original question, what does a graph of lines of code compared to performance have to do with overhead?

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#642 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"]

[QUOTE="savagetwinkie"] Ok what does lines of code/performance ratio have to do with overhead? This doesn't seem to be measuring performance/overhead Secondly DirectCompute is based on DirectX but how related is it to gaming overhead? savagetwinkie

Direct3D games are not even complied for the GPU ISA i.e. needs to be JIT recomplied before the GPU executes it.

AMD_IL_APP.jpg

This has nothing to do with the overhead. Seroiusly stop posting random **** you posted a graph that basically says, I won't have to type as much to get direct compute performance using HSA. Its not related to the overhead at all. This has even less to do comparing overhead of directx with console implementations of a graphics API

edit: back to the original question, what does a graph of lines of code compared to performance have to do with overhead?

Each instruction carries its own overhead, i.e. the CPU's instructions have the lowest overheads, and thus the best instruction efficiency, which results in the lowest first paired bar (the non-orange bar).

For "DirectCompute is based on DirectX" statement, refer to http://msdn.microsoft.com/en-us/library/windows/desktop/ff476331(v=vs.85).aspx

"compute shader is a programmable shader stage that expands Microsoft Direct3D 11".

DirectCompute or Compute Shader extends Direct3D. Seriously, stop posting random **** .

Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#643 savagetwinkie
Member since 2008 • 7981 Posts

[QUOTE="savagetwinkie"]

[QUOTE="ronvalencia"]

Direct3D games are not even complied for the GPU ISA i.e. needs to be JIT recomplied before the GPU executes it.

AMD_IL_APP.jpg

ronvalencia

This has nothing to do with the overhead. Seroiusly stop posting random **** you posted a graph that basically says, I won't have to type as much to get direct compute performance using HSA. Its not related to the overhead at all. This has even less to do comparing overhead of directx with console implementations of a graphics API

edit: back to the original question, what does a graph of lines of code compared to performance have to do with overhead?

For "DirectCompute is based on DirectX" statement, refer to http://msdn.microsoft.com/en-us/library/windows/desktop/ff476331(v=vs.85).aspx

"compute shader is a programmable shader stage that expands Microsoft Direct3D 11".

DirectCompute or Compute Shader extends Direct3D. Seriously, stop posting random **** .

You haven't posted any relevant information on the overhead of accessing hardware; none of this stuff addresses it. Or maybe it does in a very indirect way, but you've failed to bring any argument tying the two concepts together. So who's posting random ****?

Also, lines of code aren't equivalent to instructions executed.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#644 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="ronvalencia"]

[QUOTE="savagetwinkie"] This has nothing to do with the overhead. Seroiusly stop posting random **** you posted a graph that basically says, I won't have to type as much to get direct compute performance using HSA. Its not related to the overhead at all. This has even less to do comparing overhead of directx with console implementations of a graphics API

edit: back to the original question, what does a graph of lines of code compared to performance have to do with overhead?

savagetwinkie

For "DirectCompute is based on DirectX" statement, refer to http://msdn.microsoft.com/en-us/library/windows/desktop/ff476331(v=vs.85).aspx

"compute shader is a programmable shader stage that expands Microsoft Direct3D 11".

DirectCompute or Compute Shader extends Direct3D. Seriously, stop posting random **** .

You haven't posted any relevant information on overhead accessing hardware, none of the stuff addresses it. Or maybe it does in a real indirect way but you've failed to bring any argument tieing the two concepts together. So who's posting random ****

Each instruction carries its own overhead or cost, i.e. the CPU's instructions have the lowest overheads, and thus the best instruction efficiency, which results in the lowest non-orange bar (not including HSA Bolt). A single CPU's performance potential is also the lowest.

If you look in any x86 ASM programming manual, each instruction has a cost/overhead associated with it, which can be expressed as latency cycles.

The HSA driver stack attempts to achieve CPU-like instruction efficiency for the GPUs.


Avatar image for savagetwinkie
savagetwinkie

7981

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#645 savagetwinkie
Member since 2008 • 7981 Posts

[QUOTE="savagetwinkie"]

[QUOTE="ronvalencia"]

For "DirectCompute is based on DirectX" statement, refer to http://msdn.microsoft.com/en-us/library/windows/desktop/ff476331(v=vs.85).aspx

"compute shader is a programmable shader stage that expands Microsoft Direct3D 11".

DirectCompute or Compute Shader extends Direct3D. Seriously, stop posting random **** .

ronvalencia

You haven't posted any relevant information on overhead accessing hardware, none of the stuff addresses it. Or maybe it does in a real indirect way but you've failed to bring any argument tieing the two concepts together. So who's posting random ****

Each instructions carries it's own overheads i.e. the CPU's instructions has the lowest overheads, thus the best instruction efficiency, which results in the lowest 1st paired bar(non-orange bar).

Dude, that's lines of code, not instruction count; didn't you read the graph you posted? Also, you're completely wrong; this was probably done in a model environment. Even if we were counting assembly lines, that's still not instruction count: a simple for loop could easily expand to a few thousand instructions but take up two lines. The graph is very ambiguous, and the only real thing it shows is that HSA is likely a lot easier to get performance out of. But it's all a relative performance comparison on a PC operating system.

Secondly, there is more overhead than just CPU instructions with games. The overhead comes from tons of other things: the I/O being used has its overhead, the driver has its overhead, the memory has its overhead, Windows has its overhead. You're comparing relative performance running one algorithm with different methods. Calling a driver in Windows pushes the call into an event loop, so it might not even get processed immediately. This is why batched calls are important: the kernel-call overhead gets too costly when calling repeatedly. APIs let games use generic calls to push data down to hardware, but in the end those calls still have to be processed and translated for the hardware. Consoles can use compile-time optimizations, so the CPU doesn't have to do as much work to utilize the hardware.

And that's why consoles are considered more efficient and have less overhead. You're missing the key aspect of what consoles are doing better, and that's better GPU utilization and a higher average throughput with similar hardware. None of your posts have addressed this key issue.
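The "generic driver vs. fixed hardware" point can be illustrated in miniature: a generic path re-dispatches on a capability table every call, the way a driver must stay correct across many GPUs, while a console-style path can bake those decisions in ahead of time because the hardware never changes. This is a plain-Python analogy, not actual driver code; the capability names are invented for the example.

```python
# Generic path: every call re-checks a capability table, the way a
# driver must stay correct across many different GPUs.
CAPS = {"vendor": "gcn", "wave_size": 64}

def submit_generic(work, caps):
    if caps["vendor"] == "gcn":      # branch resolved on every single call
        return work * caps["wave_size"]
    raise NotImplementedError(caps["vendor"])

# Console-style path: the hardware is known once, up front, so the
# per-call branch and table lookup disappear entirely.
def make_specialized(caps):
    wave = caps["wave_size"]          # decision baked in at "compile time"
    return lambda work: work * wave

submit_fixed = make_specialized(CAPS)
print(submit_generic(3, CAPS), submit_fixed(3))  # 192 192
```

Both paths produce the same result; the specialized one simply does less per-call work, which is the shape of the console advantage being argued here.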
Avatar image for Bebi_vegeta
Bebi_vegeta

13558

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#646 Bebi_vegeta
Member since 2003 • 13558 Posts

[QUOTE="04dcarraher"]

Comparing apples to apples

, AMD's Jaguar architecture is the impending successor to the "Bobcat" architecture found in the company's current low-power APUs, and it is not especially beefy. While the idea of an octa-core console sounds dreamy on the surface, the illusion is shattered when you realize that on the PC side of things, Jaguar APUs will be modest processors targeted at tablets, high-end netbooks (ha!), and entry-level laptops.

In other words, the PlayStation 4's CPU performance isn't likely to rock your socks compared to a PC sporting an AMD Piledriver- or Bulldozer-based processor. It might not even trump a lowly Intel Core i3 processor, especially if Eurogamer's early PlayStation 4 leaks continue to prove accurate and those eight cores are clocked at 1.6GHz.

Then there's the GPU. 1.84 teraflops of performance puts the GPU just ahead of the Radeon HD 7850 and well under the Radeon 7870. That also holds true since the PlayStation 4 GPU's 18 compute units sport a build similar to the GCN architecture(ie GCN 2) used to build AMD's Radeon HD 7000-series graphics cards.

The Radeon HD 7850 is nothing to sneeze at. Indeed, if you're looking for a midrange video card, it's a good balanced option. But it's still just a midrange card, not a graphical trail blazer, and yet it will form the backbone of the PlayStation 4's gaming chops for years to come.

Overall, if you compare its hardware to what's available in today's PC landscape, the PlayStation 4 is basically powered by a low-end CPU and a midrange GPU. It even packs a mechanical hard drive in an age when many PC gamers have moved on to lightning-quick solid-state drives or even multiple mechanical hard drives.

tormentos

Ok, apples to apples then...

You are right that Jaguar is not an ultra-powerful CPU, but it doesn't have to be: compare it with Windows, which some of you feed 8GB of memory or even more, while the PS4 will run a very light OS on a streamlined system design. Apples to apples, the Jaguar in the PS4 won't even need to break a sweat on the OS, since it has a dedicated chip for that; and when you consider that most games are actually GPU-bound rather than CPU-bound, it makes even more sense.

And apples to apples, the Jaguar in the PS4 doesn't need to send information over a bus to the GPU and then wait for it to come back; on the PS4, the CPU and GPU are on the same die with shared memory.

As for the 7850 vs. 7870 argument: the PS4 is not well under the 7870 unless you are talking teraflops, which mean nothing on consoles. Having 2+ TF means little; the 7850 has 1.76, and the 7870 beats it by a very small margin in some games, not even by 5 frames. Yeah, that's "well under". You consistently ignore benchmarks and open quotes from well-known people in the industry in your endless quest to downplay the PS4.

It's a midrange GPU that will perform better than midrange, and since I, unlike you, know that optimization doesn't just mean lowering everything (that is a trade-off), I know the PS4 will be able to beat the 7870. You can mark this quote: 5 years from now, the 7870 will be choking on games the PS4 will be running fine.

So what advantage do multiple mechanical drives have over the PS4's HDD, other than space? Because an SSD at least gives a boost in speed, but SSDs are very expensive, and considering that you people actually claim PC gaming is not expensive, that a few-hundred-dollar rig can beat consoles, the SSD is a killer blow to the whole argument: a 128GB SSD is like $150, and with that I can buy a 1+TB HDD for my PS4.

An SSD will not deliver any advantage other than faster loading, as long as your PC can keep the GPU well fed.

You don't know what RAID is?

Also, SSDs have gotten cheaper... $160 is more like a 240GB SSD now. Not only does it load faster, but the access time is practically instant compared to an HDD.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#647 ronvalencia
Member since 2008 • 29612 Posts

dude that's lines of code, not instruction count, didn't you read the graph you posted? Also your completely wrong, this is probably done in a model environment, . Even if we are counting assembly lines that's not instruction count, a simple for loop could easily expand to a few thousand instructions but take 2 lines up. The graph is very ambiguous and the only real thing it shows is that HSA is likely a lot easier to get performance out of. But its all a relative performance comparison on a PC operating system. Secondly there is more overhead than just CPU instructions with games. The overhead comes from tons of other things. IO being used has it's overhead, the driver has its overhead, the memory has it's overhead, windows has it's overhead. Your comparing a relative performance running an algorithm with different methods. Like calling a driver in windows pushes it into an event loop so it might not even get processed immediately. This is why batch calls are important because the kernel call overhead gets too costly for calling it repeatedly. And API's make it so games can using generic calls to push data down to hardware but in the end the instructions still likely have to be processed and scaled for the hardware. Consoles can use compile time optimizations making it so the CPU doesn't have to do as much work to utilize the hardware. And that's why consoles are considered more efficient and have less overhead. Your missing the key aspect of what consoles are doing better, and that's better GPU utilization and a higher average throughput with similar hardware. None of your posts have addressed this key issue.

savagetwinkie

On the PC, most of the I/O (the northbridge and main PEG lanes), the driver's JIT, the Windows OS/DirectX machinery (e.g. event loops/callbacks/DPCs) and so on are managed by the CPU complex.

You missed the compile (dark blue) section, which is associated with the current driver stack, i.e. the JIT recompiler.

The "Launch" (green) section would be associated with the data transfer setup.

On the memory issue, the bar graph shows the "copy" (yellow) and "copy back" (red) inefficiencies, which affect both C++ AMP and OpenCL.

From http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/

A program running on the CPU queues work for the GPU using system calls through a device driver stack managed by a completely separate scheduler. This introduces significant dispatch latency, with overhead that makes the process worthwhile only when the application requires a very large amount of parallel computation.

...

The HSA team at AMD analyzed the performance of Haar Face Detect, a commonly used multi-stage video analysis algorithm used to identify faces in a video stream. The team compared a CPU/GPU implementation in OpenCL against an HSA implementation. The HSA version seamlessly shares data between CPU and GPU, without memory copies or cache flushes because it assigns each part of the workload to the most appropriate processor with minimal dispatch overhead.

PS: A "cache flush" can cause a CPU slowdown, i.e. waiting for data to reload.


Avatar image for Tessellation
Tessellation

9297

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#648 Tessellation
Member since 2009 • 9297 Posts

PC is and always will be ahead of console toys..only butthurt dweller fanboys would deny it :cool:

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#649 ronvalencia
Member since 2008 • 29612 Posts

[QUOTE="04dcarraher"][QUOTE="tormentos"]

Will see about soon enough,thankfully is not just mean who thinks consoles are more efficient than PC,some much more credible people than both of us do.

savagetwinkie

being more efficient vs sheer processing power. Your efficiency arguments are flawed since all API overhead is all done on cpu's as long the cpu are fast enough you have no overhead to worry about. Now if you where comparing similar low clock cpu's and or much older based dual or quad core based cpu's then yes you would have a point. but the fact is that even with current console based multiplat games only require same era(2005/2006) based dual core cpu's around 3 ghz range to run the games as well as consoles, even with the massive overhead you keep on spouting about.

its not all CPU based inefficiency, the GPU isn't being used as well as it could be because the drivers are more generic, and regardless of how fast CPU's are, the extra overhead creates latency.

With similar GFLOPS, Crysis 2 on a Radeon X1950 Pro rivals the Xbox 360's Crysis 2. Note that this PC is not equipped with an Intel Atom-class CPU design (the in-order, dual-instruction-issue-per-cycle type).

For large computations, the overhead issue is blown out of proportion. The problem is with small amounts of computation on the GPU, i.e. using the GPU like a CPU.

#650 savagetwinkie (Member since 2008 • 7981 Posts)

[QUOTE="savagetwinkie"]

Dude, that's lines of code, not instruction count; didn't you read the graph you posted? Also, you're completely wrong: this was probably done in a model environment. Even if we were counting assembly lines, that's not instruction count; a simple for loop could easily expand to a few thousand instructions while taking up two lines. The graph is very ambiguous, and the only real thing it shows is that HSA is likely a lot easier to get performance out of. But it's all a relative performance comparison on a PC operating system.

Secondly, there is more overhead than just CPU instructions with games. The overhead comes from tons of other things: the I/O being used has its overhead, the driver has its overhead, the memory has its overhead, Windows has its overhead. You're comparing the relative performance of one algorithm run with different methods. For example, calling a driver in Windows pushes the call into an event loop, so it might not even get processed immediately. This is why batched calls are important: the kernel-call overhead gets too costly when the call is repeated. APIs let games use generic calls to push data down to the hardware, but in the end the instructions still have to be processed and scaled for the hardware. Consoles can use compile-time optimizations, so the CPU doesn't have to do as much work to utilize the hardware. That's why consoles are considered more efficient and have less overhead.

You're missing the key aspect of what consoles do better: better GPU utilization and higher average throughput on similar hardware. None of your posts have addressed this key issue.

ronvalencia

You missed the compile (dark blue) section, which is associated with the current driver stack, i.e. the JIT recompiler.

The "launch" (green) section would be associated with the data transfers (including I/O and setup).

On the memory issue, the bar graph shows the "copy" (yellow) and "copy back" (red) inefficiencies, and they affect both C++ AMP and OpenCL.

From http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/

A program running on the CPU queues work for the GPU using system calls through a device driver stack managed by a completely separate scheduler. This introduces significant dispatch latency, with overhead that makes the process worthwhile only when the application requires a very large amount of parallel computation.

...

The HSA team at AMD analyzed the performance of Haar Face Detect, a commonly used multi-stage video analysis algorithm used to identify faces in a video stream. The team compared a CPU/GPU implementation in OpenCL against an HSA implementation. The HSA version seamlessly shares data between CPU and GPU, without memory copies or cache flushes because it assigns each part of the workload to the most appropriate processor with minimal dispatch overhead.


So, a little more reading on HSA: this is a completely different hardware standard targeted at SoCs and APUs, and it's even less relevant than I originally thought, since gaming isn't going to detach itself from DirectX any time soon. The entire point of DirectX is to abstract away hardware-specific details so that games are deployable across all PCs.
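The batching point raised earlier in the thread (repeated kernel/driver calls getting too costly) can be sketched the same way: each call into the driver pays a fixed transition cost, so one call per object is far more expensive than one call per batch. The per-call and per-object costs below are invented for illustration.

```python
import math

CALL_OVERHEAD_US = 10.0   # fixed cost per driver call (hypothetical)
PER_OBJECT_US = 0.5       # GPU time per object (hypothetical)

def unbatched_cost(n_objects):
    """One driver call per object: overhead scales with object count."""
    return n_objects * (CALL_OVERHEAD_US + PER_OBJECT_US)

def batched_cost(n_objects, batch_size):
    """One driver call per batch: overhead scales with batch count."""
    calls = math.ceil(n_objects / batch_size)
    return calls * CALL_OVERHEAD_US + n_objects * PER_OBJECT_US
```

Under these toy constants, drawing 1,000 objects in batches of 100 costs a small fraction of drawing them one call at a time, which is why both console APIs and modern PC APIs push developers toward fewer, fatter submissions.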