I think games will look better over the course of the generation. Maybe not as much as in prior generations--as one poster pointed out, the boxes aren't as custom as consoles used to be. I'm no techie, but the "to the metal" phrase still applies to this gen, no? None of the Windows bloatware...
...anyways, I think the image quality will improve more because of design choices than raw tech. Plus, little differences like lighting and transparencies, along with increases in resolution, add a significant amount of fidelity to an image; it's just something you only realise you've got once you look back at what you had.
It was already demonstrated in a real world setting. It seems that the Xbox getting an advantage scares you. Keep dismissing the facts and you will be fine.
Hahahaa. Fucking lol. You think Halo and Gears (which are already dead) will scare the PS4? Quantum Break will flop and live forever in the shadow of Sony exclusives. Face it, lem: Sony is gonna crucify Microshaft on exclusives, just like they did last gen and the gen before. Rare is dead, Bungie has left, nobody cares about Gears anymore, and Halo 4 was meh compared to 3. The best game on Xbox One seems to be Sunset Overdrive. Looks good, but will it be? Microsoft have a lot of pressure on their heads. They need to do something drastic at E3, like some amazing new IP that can dethrone TLOU. I'm talking that good. And guess what: it's not gonna happen.
I guess we will have to see at E3. You speak as if Sony has more lined up. So far The Order is all it has going, and that game is a Gears rip-off that will bomb. I could make the exact same argument, but with more announced games.
2) Last gen was fucking weird. The tech was alien to a lot of devs, so it took a while for them to unlock the hardware's full potential. This gen, consoles use the same x86 tech as PCs, and so, as quite a few devs say, we're already looking at games near the full potential of this gen, like ISS.
Why would you even say that? That statement is not only false, it's a stupid assumption to make. The margin is minuscule.
And lol, what you just said has absolutely nothing to do with my point that console graphics won't improve as dramatically as they did last gen.
What's funny is that he thinks they're talking about overall performance, which is totally false. The 2x performance is nothing but the CPU overhead; what they're saying is that if you had a processor equal to the console's, the console would perform better. If consoles really were anywhere near 2x faster overall, we would see the PS3 or 360 performing on par with PCs running GeForce 8800GTs, which they clearly do not. Nor would we see 8600GTs, or even ATI 1950s and other comparable GPUs, getting on-par-to-better results than those consoles.
The CPU overhead is mainly in draw calls, which are actually 10 to 100 times faster on PS4 than through DirectX. You are just a blind hermit who thinks he has knowledge of the subject and makes something totally random out of whatever a dev says. You sway it to your advantage and say "this is what he actually meant", even though you really haven't a clue. A developer could tell it to your face and you would still manage to think he or she is wrong and you and your reddit pcmasterrace friends are correct. Do you really think all third-party developers are gonna spend their development budget on low-level coding for one console? Of course not. So yes, you do see a lot of multiplats looking similar on an equivalent PC for this reason. Exclusives on consoles (which obviously can't be compared to PC) utilize the console fully and allow for 2x performance if coded properly. You can bring a supercomputer down with one dodgy line of code. It's all about the software, not just the brute force of hardware that PC has. PS3 multiplats do not perform like an 8800GT, which is about twice the power of the PS3's GPU, but its exclusives like TLOU definitely look 8800GT-like. The amount of optimization and low-level algorithms in that game was off the charts.
Look, I get that you are a proud PC 'nobleman' and you don't like to be questioned in any way, but just accept that consoles allow for (not always give) 2x performance. This will not be seen in most multiplats, but through exclusives. It's hard to talk over a forum because there is too much nitpicking and discrimination, but I hope you understand where I'm coming from at least.
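To put rough numbers on the draw call point, here's a toy back-of-the-envelope model. The per-call costs are made-up illustrative figures, not measurements, but the ratio is in line with the "10 to 100 times" claim:

```python
# Toy model of CPU time spent just submitting draw calls each frame.
# The per-call costs are ILLUSTRATIVE ASSUMPTIONS, not measurements.

DRAW_CALLS_PER_FRAME = 3000   # a busy scene
HIGH_LEVEL_COST_US = 30.0     # assumed microseconds per call through a thick API
LOW_LEVEL_COST_US = 1.0       # assumed cost with near-direct hardware access

def submission_ms(calls, cost_us):
    """CPU milliseconds per frame spent issuing draw calls."""
    return calls * cost_us / 1000.0

print(submission_ms(DRAW_CALLS_PER_FRAME, HIGH_LEVEL_COST_US))  # 90.0 ms
print(submission_ms(DRAW_CALLS_PER_FRAME, LOW_LEVEL_COST_US))   # 3.0 ms
# Against a 33 ms (30 fps) frame budget, the first case is CPU-bound on
# submission alone; the second leaves nearly the whole budget for game code.
```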
You're so full of yourself, and you clearly have no idea... PS3 with TLOU looking and acting like it was on an 8800GT? Lol, wishful thinking. All the "2x performance" claims are about CPU usage, not GPU usage. PC CPUs have been ahead of the game since 2006, making up the slack of having to process those draw calls through DirectX. Fact is, these new consoles are in a worse position on both the CPU and GPU front than their predecessors were in 2005/2006.
An 8800GT coupled with a Core 2 Duo can run Crysis 1 on High settings (the second highest) at 1440x900 at 42fps. And Crysis 1 with those settings >>> any PS3 game.
You're so full of yourself, and you clearly have no idea... PS3 with TLOU looking and acting like it was on an 8800GT? Lol, wishful thinking. All the "2x performance" claims are about CPU usage, not GPU usage.
Proof?
Common sense..... you should go look for it. =P
All overheads are handled by the CPU; the CPU is what controls the system. A low-level API removes the high-level, software-based workloads the CPU would otherwise have to process. That in itself can give massive gains on the CPU side: not having to process the extra work speeds things up in general on all fronts, leaving more CPU cycles for other jobs. This is why the 360/PS3 outdid PCs at their release, when the majority of PCs had single-core processors. Games that were also on consoles usually only used one thread, and they typically ran much better there than on PC CPUs that were roughly equal clock for clock. When dual-core CPUs became the norm, engines started to make use of two threads, and with the architecture and performance gains of those new CPUs they could out-process the consoles with ease and brute-force through the DirectX overhead, allowing comparable GPUs to perform as well as or better than the consoles could.
This is the same reason Intel CPUs see little to no gain going from DirectX 11 to Mantle: they have the processing power to brute-force through DirectX's overhead to begin with. AMD CPUs, being much slower clock for clock, benefit hugely; removing the unneeded workload DirectX creates improves the CPU's ability to feed the GPU and handle all the other data it needs to process, improving performance by as much as 100%. Now, these new consoles' processors are really weak. Developers all over have stated this, and that they *have* to utilize all six threads to get where they want to be. Two of the cores/threads are reserved for the OS and its features, which should tell you how weak these processors are and how much more complex the OSes are too. The architecture and the 1.6-1.7GHz clocks are another blow to what they can do, which is why you see framerate issues in BF4 64-player MP. In games like ISS, the framerate drops can come from both the GPU and the CPU side, from turning very quickly to hectic scenes with a lot happening on screen.
The Jaguar architecture itself is only around 15% faster than AMD's Bobcat series, which clock for clock is slower than the old pre-2006 Athlon X2s. While Jaguar is slightly faster, it's nowhere near Intel's Penryn/Core-based processors in processing power, let alone AMD's K10-based CPUs. Needless to say, even with a low-level API, those six Jaguar cores won't perform any better than an Athlon II X4 at 2.6GHz.
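Crudely, you can see why Mantle helps a slow CPU far more than a fast one with a toy frame-time model. Every millisecond figure below is an assumption for illustration, not a benchmark:

```python
# Toy frame-time model: a frame is gated by whichever of CPU or GPU
# finishes last. All millisecond figures are ILLUSTRATIVE ASSUMPTIONS.

def frame_ms(cpu_game, api_overhead, gpu):
    """Frame time = max(CPU work incl. API overhead, GPU work)."""
    return max(cpu_game + api_overhead, gpu)

GPU_MS = 16.0        # same GPU in both rigs
OVERHEAD_DX = 10.0   # assumed DX11 driver/API cost per frame
OVERHEAD_LL = 1.0    # assumed low-level API cost per frame

# Fast CPU: game logic takes 6 ms. It hides the overhead behind the GPU.
print(frame_ms(6.0, OVERHEAD_DX, GPU_MS))   # 16.0 -> GPU-bound, no gain from Mantle
print(frame_ms(6.0, OVERHEAD_LL, GPU_MS))   # 16.0 -> identical

# Slow CPU: game logic takes 20 ms. Now the API cost is the whole story.
print(frame_ms(20.0, OVERHEAD_DX, GPU_MS))  # 30.0 -> CPU-bound
print(frame_ms(20.0, OVERHEAD_LL, GPU_MS))  # 21.0 -> a ~43% faster frame
```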
Your first paragraph is pretty much 100% correct. With brute force, the overheads began to matter less and less. All I'm going to say is: what is the point of the PS4's low-level API, or Mantle? Especially Mantle on high-end PCs, lol. Are they there for shits and giggles? Or was Mantle simply created so that low-end CPUs could perform better?
So you are basically saying that the PS4 has no advantage over a PC of the same power, and never will? Just to clarify.
Mantle was created to help AMD's CPUs perform better by cutting the workload DirectX creates, saving CPU cycles for other tasks alongside their GPUs. A low-level API is a given for consoles, but the principle is the same. Mantle does not help Intel-based CPUs because of their processing abilities. The PS4 has no real advantage over a PC with a stronger CPU and an equal GPU running DirectX, and in this day and age it's very hard not to have a stronger CPU than what's in the X1 or PS4, whether it's a high-clocked AMD quad-, hexa-, or octo-core, or an Intel quad-core. With Mantle-based or future DirectX 12-based games, you can have a merely comparable CPU with an on-par GPU and get comparable results.
when it comes to system wars, i've observed a kind of a polygon paradox. i was thinking about this last night.
really, you want a mesh with clean topology eg: less polys, you don't want unnecessary topology because you have to render that unnecessary topology and that will take 'work'. looking through 'showreel' equivalents of 3D modellers, many of them 'flex' their ability to do things like take 5M poly sculpts and drop them down to 10K poly characters ready for in-game deployment.
but i've seen people argue polygon count as a metric for graphical goodness. well, does it make sense to argue polygon count when they'll want to keep topology as simple as possible without compromising their vision for the visuals? i'm not sure it does. it's kind of circular because a high poly count mesh might actually be a very poor mesh, and a low poly count mesh could look better.
obviously a model can be highly detailed yet also have an efficient, clean topology. a scene with many "good" yet low poly count models will render much better than a scene with few "poor" yet high poly count models.
anyway. @_Matt_ i consider you an expert in the field. feel free to eviscerate my argument if I'm completely wrong, but it seems to me that poly count isn't a very useful metric when it comes to assessing visuals or system power?
You are smack on the money there CKA, though I'm still far from an expert in this area; there's always much to learn!
The poly count is really only half of it. In fact, the polys are only really needed for the silhouette of most models; the texture maps are easily the most important part of getting assets looking good. When I was doing my final 3D module at uni, it was pushed on me time and again that for complex models, modelling is about 1/3 of the work needed to produce a good asset; nearly all of the rest is in the texturing.
As an example, here is a wheel from a model I did a while ago:
On the left wheel there are 4700 polys, and on the right there are only 1500, yet they look almost identical (or they would if I'd done a proper job of it; this was a rush job, but you get the idea). This is achieved with texture maps (normal and ambient occlusion).
If I had spent more time on it, the two wheels would have looked even closer, and at 1/3 of the polys, it's all about the efficiency.
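If it helps to picture what the normal map is actually doing, here's a very simplified sketch of the per-pixel maths (illustrative only, nothing like real engine code, and the texel value is made up):

```python
# Very simplified tangent-space normal mapping: the texture, not the mesh,
# supplies the per-pixel surface direction that the lighting uses.

def decode_normal(rgb):
    """Map a normal-map texel from the [0,1] colour range to a unit vector."""
    x, y, z = (2.0 * c - 1.0 for c in rgb)
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

def lambert(normal, light_dir):
    """Diffuse intensity: dot(N, L), clamped at zero."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A flat low-poly face would shade uniformly, but a texel like this tilts
# the shading normal and fakes a bump the geometry doesn't actually have.
texel = (0.7, 0.5, 0.9)  # a typical bluish normal-map pixel (made-up value)
print(lambert(decode_normal(texel), (0.0, 0.0, 1.0)))  # ~0.89 instead of 1.0
```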
So yes, polys need to be clean and efficient, laid out with good topology as you say, but most of the detail comes from texture maps, especially where hardware is restricted, such as in console games. I apologise for the wall of text.
@_Matt_: no need to apologise for all the words/letters man, i appreciate you taking the time to type dat shizzle. and especially thanks for showing a real life example. the air fill connector is a nice touch.
thanks for confirming that my thinking's on the right track, cheers. and just to confirm, texture mapping is (crudely) adding stuff like the skin, specularity, bump mapping - stuff like that?
@FoxbatAlpha: Can't say I did. I went to a University in Northern England.
@CrownKingArthur said:
@_Matt_: no need to apologise for all the words/letters man, i appreciate you taking the time to type dat shizzle. and especially thanks for showing a real life example. the air fill connector is a nice touch.
thanks for confirming that my thinking's on the right track, cheers. and just to confirm, texture mapping is (crudely) adding stuff like the skin, specularity, bump mapping - stuff like that?
yeah that's exactly it, on a very basic level: diffuse maps (the colour), specular and gloss maps (shininess and reflectivity), normal maps (fine detail bumps), opacity maps (transparency), displacement maps (physical surface bumps), and ambient occlusion maps (natural shadows) all work together (whichever are applicable) to form the material and detail appearance of a model.
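on a really crude level, you can picture the maps combining per pixel something like this (a toy sketch with made-up sample values, nothing like a real shader):

```python
# Toy sketch of how the maps above combine per pixel. Real engines do this
# in shader code with proper BRDFs; every sample value here is made up.

def shade_pixel(diffuse, ao, spec, gloss, n_dot_l, n_dot_h):
    """diffuse is an RGB tuple; ao/spec/gloss are scalar map samples in [0,1]."""
    base = tuple(c * ao * n_dot_l for c in diffuse)   # colour * baked shadow * light
    highlight = spec * (n_dot_h ** (gloss * 100.0))   # glossiness shapes the highlight
    return tuple(min(1.0, c + highlight) for c in base)

# n_dot_l / n_dot_h would come from the (normal-mapped) shading normal.
print(shade_pixel(diffuse=(0.6, 0.4, 0.3), ao=0.8, spec=0.5,
                  gloss=0.3, n_dot_l=0.9, n_dot_h=0.95))
```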
Maybe once cross-gen games stop being released we will see a marginal increase, but I wouldn't get my hopes up. We're never going to see another huge leap in graphical fidelity like we did going from 2D to 3D or SD to HD, and I think that's what ignorant people were expecting with this gen.
There was a huge jump in graphics and visuals from SD to HD, but it was the resolution of TVs changing that made the huge difference, not the new consoles coming out. SD TV shows look much worse than HD TV shows.
@FoxbatAlpha: lol Full Sail. You probably didn't mean it, but that could be taken as an insult these days.
@_Matt_: What are you baking in? Looks like Max.
I usually work on film stuff, but I've had a lot of real-time projects on the plate recently and I'm finding Xnormal is GODLY for baking. It handles extremely dense meshes with no need for decimation; I've brought in subtools in excess of 20 million tris. Very fast too. AO takes a bit, as does curvature if you're setting up for SSS, but those take long just about anywhere. Plenty of other neat features too; for instance, you could bake out a cavity map, but you can also import your tangent-space normal map and convert it to cavity right there, for better results faster.
It's free and you can learn it in a day, master it in a week. I only say all this because I remember how dreadful Max can be in this regard. Xnormal delivers better results faster. I moved on to a Modo/Zbrush/Xnormal set up and haven't looked back lol.
Nah, it was a quick dirty bake in Mudbox. I prefer it over 3DS Max baking as it usually gets better results quicker, and I can combine it with sculpting.
Yeah, I need to properly learn Xnormal; I've been meaning to for a while but just haven't had the chance between finishing uni and teaching myself Unreal 4. I will be teaching myself ZBrush to use instead of Mudbox for sculpting soon too.
Ahh, Mudbox. Love it for texture painting, never tried baking normals or AO there though.
This video and its 2nd part are all you need to get started in Xnormal. 20 minutes and you'll be ready to go; the rest is just trial and error. Not your typical interface, but it's actually very simple.
I wish I could be in UDK right now, but I'm currently on a Unity project and the shader system is ass unless you're a programmer.
And omg yes, Zbrush for sculpting all the way! It's easily my favorite piece of software whether designing, detailing, or modeling organics. The interface and navigation are so ass backwards you would think it was designed by aliens. Once you get past that though, it's easily the best sculpting software on the market.
Edit: Oops, meant this video.
Thanks for the linky. Shall check that out.
I love UDK, but it is flawed; UE4 seems to improve on it in every way. I have friends who used Unity this year, and they did nothing but complain that you could only plug in 2 texture maps by default, so they had to choose between specular or normal for everything. Pretty flawed if that's true and they weren't just being lazy.
The UI is what has put me off ZBrush for so long, but I realise it is far more powerful than Mudbox, so I will get to it.
Yeah, I know exactly what you're saying. Focusing on poly count alone is really silly, lol. However, in this 8th generation of consoles (and in PC games too, as consoles affect PC ports), you will see a significant increase in geometry. Obviously that alone does not equal a better image to the end user; texture maps are extremely important. I used Crysis 3 on PC and 360 as an example.
These models are pretty much exactly the same mesh in both versions. But, as you can see, the PC version looks much, much better because, as you say, the texture maps are what make the immediate difference. There is absolutely no doubt about that. What I am saying, however, is: say you have a cup on the table in a 7th gen game, composed of 100 polygons. It is likely that small details like this will be increased in 8th gen titles, just as they have been through every generation. Essentially, an 8th gen PC game on low settings will be vastly better than a 7th gen PC game on low settings because of geometric detail. Of course, if, like in Crysis 3, the character models are already sufficient to look amazing once good texture maps and skin maps are applied, then the average person will not perceive the difference between a 7th gen game and an 8th gen game in terms of geometric detail. But I guess the main question is: will Crysis 4, given that it's made for next gen and not last gen, look much better than Crysis 3 if the character models, animations, AI, and geometric detail of levels etc. are vastly improved, but the textures and lighting remain about the same?
Of course. I appreciate I went a little off topic with what I said and didn't really relate it to the subject at hand.
I agree that poly count will naturally increase this gen, and it is a nice jump in geometric detail over gen 7. I think the jumps in raw poly counts will get smaller though, as texture detail gets more and more of the attention. I think that is exactly what you were saying too, but yeah, gen 7 to 8 poly-count-wise is a much smaller jump than gen 6 to 7... not because of the differences in the power jumps (as some will say), but more simply because of much higher res textures :) .
Good example with Crysis 3 though; they will likely have identical poly counts to save time, just nicer res textures as you say.
If the textures and lighting in Crysis 4 remained the same as in 3, but everything else were improved as you asked, I don't think it would look visually much better. Sure, to the scrutinising eye I'm sure people would see much nicer curves on the silhouettes of models, but if the lighting and textures don't improve at least as much as the geometry, it will hardly make a difference at all.
Wow, this is hard for me to wrap my head around. For all this time, devs have been incorrect. These people who have actually worked with these machines are just wrong. Idiot developers! To think, the only difference between consoles and PCs in terms of optimizing is the CPU overhead. Wow. All of that silly 'custom access to the GPU' garbage is just PR bullshit. Doesn't do a thing for performance, or if it does, 'it's minuscule'. And if that PC has an i5 or an i7 with the same-powered GPU as the PS4, it will be just as capable, if not more so, than the PS4 throughout its whole lifetime using just DirectX 11 (even though that top-end i7 is still getting 10 to 100 times fewer draw calls per second than a PS4 using GNM). I... I don't know what to say. I'm speechless.
For you to make such claims, you have obviously had experience in both fields of development and can recognise the difference. If I may, what development fields do you work in? HLSL? C++? Or x86 assembly? Do you do engine programming? If not, then where have you gathered your knowledge of the benefits (if there are any, according to you) of low-level APIs?
I'm curious: since these low-level APIs only help weak CPUs, what if the PS4 used an i5 or an i7? Would they have just stuck with a high-level API? Because there would be no use for a low-level API if the CPU is 'powerful enough to overcome the overheads'.
We saw the same effect this last generation, with GeForce 8600s, ATI 1950s, etc. performing as well as or better than the 360 in games when paired with a good dual-core CPU that was multiple times faster than its triple-core CPU. And the PS3's PPE counted as the processor/"CPU", excluding the SPEs, which were designed for parallel and graphics workloads.
As you can see, when the last gen came out in 2005, the 360 was as fast as or faster than the first generation of dual cores from 2005/early 2006, aka the Socket 939 AMD chips. When Intel released their C2Ds and C2Qs, and AMD their AM2-based Athlon X2s and Phenom series, it was game over.
If the PS4 or X1 had an AMD FX 8 running at 3GHz you would see a massive difference vs those Jaguars; heck, even running those Jags at 3GHz would help a lot. These new consoles came out the door underpowered and outgunned from the start. Their performance per clock is something like 35% lower than AMD's Athlon IIs, and then those Athlons operate at another 50-100 or more percent higher clock rates, increasing the difference. A simple way to look at it: say each game-available PS4 core rates 100, so 6 cores x 100 = 600 for games, while an Athlon II X4 650 at 3.2GHz rates 135 at 1.6GHz vs the Jaguar's 100 at 1.6GHz. With double the clock rate you can rate that Athlon at 270 per core, so 4 cores x 270 = 1080; the PS4's six cores total only about 55% of that Athlon's processing throughput. And this isn't even comparing the Phenom IIs, the FX series, or Intel's 1st-4th gen Core i-series, which are all faster than AMD's per clock.
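Spelled out, in case the arithmetic isn't clear (same made-up ratings as above, just arithmetic, not benchmarks):

```python
# The same back-of-envelope rating, spelled out. The per-clock ratings
# (Jaguar = 100, Athlon II = 135 at equal clocks) are assumed rough
# figures from the paragraph above, not benchmarks.

JAGUAR_RATE_1_6GHZ = 100
ATHLON_RATE_1_6GHZ = 135               # ~35% faster per clock (assumed)

ps4_total = 6 * JAGUAR_RATE_1_6GHZ                    # six cores free for games
athlon_total = 4 * ATHLON_RATE_1_6GHZ * (3.2 / 1.6)   # four cores, double the clock

print(ps4_total, athlon_total)            # 600 1080.0
print(f"{ps4_total / athlon_total:.0%}")  # 56% -- the PS4's game-available CPU
                                          # throughput is a bit over half the Athlon's
```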
I've been working on computers since the late 90s and have had my fingers in C++, graphic design, and a few other things over the years. But it doesn't take a rocket scientist to see what these consoles are and what they're able to do. And with them using more and more PC hardware and software standards each generation, you can compare them for what they are, especially this gen with its AMD-based tech.
While developers have more control over the hardware, they can't get past its physical limitations. There will be improvements as time goes on, but they will come sooner rather than later, and the hardware's peak will definitely be reached within the first couple of years. It's not like they're waiting for the software to catch up to the hardware as with the 360.
Throwing in a CPU as fast as an i5 or i7 and then purposely using a high-level API would be a dumb waste of resources, lol. DirectX is there for compatibility purposes, and that's why there is more overhead. But as more and more powerful GPUs come out, Mantle/DX12 will help those i5s/i7s save some CPU cycles to squeeze out a bit more performance.
Here is an example of a game being limited by AMD's slower processing power having to fight through DirectX without all CPU cores working like they should, and what happens when Mantle shows up.
The bolded part contradicts what you said about Mantle and other low-level APIs giving performance boosts only to low-powered CPUs. You specifically said "The PS4 has no real advantage over a PC with a stronger CPU and an equal GPU running DirectX". If a low-level API doesn't make a difference to an i5 or i7, then what's the point of having a low-level API complicating things? I also made the claim that a GPU running through DirectX is used nowhere near its max potential; you then ridiculed this statement and brushed it aside. You do realize that one of the chief complaints devs have with DirectX is all the power that is lost through lack of control? This has been confirmed many times.
Here are the facts. A PC with, let's say, a quad-core 3.2GHz Phenom II X4, 8GB of DDR3 RAM, and a 1.8 TFLOP AMD GPU (whatever that might be) running through DirectX 11 could only perform roughly half as well as a PS4 potentially could through GNM, fully utilizing the hardware with low-level algorithms. Yeah, that gets to you, doesn't it? You are pre-programmed to call bullshit on that one, aren't you? Explain why I am incorrect.
You're not understanding. A low-level API saves resources on the CPU; it's common sense to use one in a closed system where you only have one set of hardware and software to run. The DirectX layers are there for compatibility purposes, making sure this set of hardware works with that set of hardware across infinitely many configurations. If you don't have enough free CPU cycles, the CPU's ability to process everything else it needs to drops, lowering performance across the board. Your claim of DirectX not being able to use most of the GPU, or that other one claiming it only sees half, is BS, because DirectX is handled by the CPU. Yes, you don't have full control over every piece of hardware through DirectX, but you don't lose half or whatever you're thinking. If DirectX really limited GPU ability like that, you wouldn't have seen a dual-core computer with an 8800 performing 2-3x faster, at 2-3x the resolution, of what the consoles were running in games.
So no, you wouldn't see a Phenom II at 3.2GHz performing only half as well as a PS4... You would see that Phenom perform as well as or better than the PS4.
What you don't seem to understand is that the 2x performance claims for consoles vs equal PC hardware cover every aspect of optimization, which includes lower overhead on the CPU and tweaking every aspect of the engine and assets to fit that specific hardware. You can't just say that a CPU half as fast as another can out-process the faster one purely because of the lower overhead. Most developers give users the ability to change settings on PC to fit their hardware too, and since most dedicated PC gamers have hardware on the same standards as these consoles, developers will be doing a lot more sharing of code and assets across versions.