Dat SecretSauce makes for a SW.... spicy-a-meat-a-ball.
The insecurity is strong in this thread. :P
@tormentos : For the record, I have and never will have duplicate accounts. Seriously, what's the point? Is it too hard to accept that there are several people who simply don't agree with you, or who think you are a simple fanboy who prefers to write in these forums rather than actually play games?
Same month, same year, same low-ass post count.
Same arguments about secret sauce.
Lemming both.
Both faceless alts.
Even the names...
Hahaha.........
So I prefer to post here rather than play games? Do you have an Xbox Live account?
Oh yeah you do...
http://www.xboxlivescore.com/profile/TDKMillsy
Hahaha now say it isn't you...hahahaaaaaaaaaaaa
Is that a testament to a fakeboy? In before you claim to have another account... hahaha
Google is your friend.
lmao a podcast designed to keep lemming peckers up.
Da cloud indeed.
That podcast was great of Phil to do. He even gave those guys some exclusive news! Just saying it is great to have Phil on our side. Maybe he will come on P2M????? @Heil68, @TheEroica,@Animal-Mother.
We should ask Phil!
That podcast was great of Phil to do. He even gave those guys some exclusive news! Just saying it is great to have Phil on our side. Maybe he will come on P2M????? @Heil68, @TheEroica,@Animal-Mother.
Yeah, he announced some more classic Rare IPs are going to be butchered. Thank you based Phil.
@tormentos : For the record, I have and never will have duplicate accounts. Seriously, what's the point? Is it too hard to accept that there are several people who simply don't agree with you, or who think you are a simple fanboy who prefers to write in these forums rather than actually play games?
Same month, same year, same low-ass post count.
Same arguments about secret sauce.
Lemming both.
Both faceless alts.
Even the names...
Hahaha.........
So I prefer to post here rather than play games? Do you have an Xbox Live account?
Oh yeah you do...
http://www.xboxlivescore.com/profile/TDKMillsy
Hahaha now say it isn't you...hahahaaaaaaaaaaaa
Is that a testament to a fakeboy? In before you claim to have another account... hahaha
Thanks for posting my Xbox profile all over a forum. Really appreciate that. I have a PSN account and a load of other PC accounts. What exactly is your point?
You sad, sad kid. The desperation to prove I'm the same person as ttboy. Why would anyone run two profiles on a forum? What would be the point? If you were being a dick and got banned, maybe. But otherwise, why bother? Ask yourself why I would sign up as ttboy and then sign up as tdkmillsy, especially when I didn't post anything in these forums (or very little, I can't remember that far back) until recent years.
If anyone is faking here, it's you. Faking that you actually have a PS4, and faking that you come on this forum to talk about games, when in fact you're just looking to big yourself up and desperately trying to prove you are right.
MCC is not on any other console
Halo 3, Halo 4 and Halo 1 are all on Xbox 360.
Halo 1 is also on the original Xbox, and Halo 2 is also on Xbox.
So yeah, all the games are on multiple platforms; joining them all together in one package doesn't make the Xbox 360 versions disappear.
Thanks for posting my Xbox profile all over a forum. Really appreciate that. I have a PSN account and a load of other PC accounts. What exactly is your point?
You sad, sad kid. The desperation to prove I'm the same person as ttboy. Why would anyone run two profiles on a forum? What would be the point? If you were being a dick and got banned, maybe. But otherwise, why bother? Ask yourself why I would sign up as ttboy and then sign up as tdkmillsy, especially when I didn't post anything in these forums (or very little, I can't remember that far back) until recent years.
If anyone is faking here, it's you. Faking that you actually have a PS4, and faking that you come on this forum to talk about games, when in fact you're just looking to big yourself up and desperately trying to prove you are right.
Hahahaaaaaaaaaaa Fakeboy...lol
Of course the 360 version doesn't disappear. But the Master Chief collection is only available on Xbox one. It is going to be epic.
I expect huge amounts of damage control to start appearing within the next couple of weeks as marketing and information is ramped up.
It doesn't matter; all the content of the collection can be played elsewhere right now, so you don't need an Xbox One to play any of it. Unless you mean Halo 2 online, which is not worth 60 dollars; worse, it's the only game in the pack that isn't already on Xbox 360, yet it is not 1080p/60fps.
It's not going to be epic; it's just a bunch of Halo games already on other consoles grouped together. The more you hype this, the more TLOU makes a dent in the Xbox One library. You can't have it both ways, and TLOU has 95 on Metacritic, which is higher than any Xbox One game so far, and it's a fresh new series.
@tormentos:
Bunches of new content added. Face it, MCC is a standalone milestone. Just get an X1, you know you want One. :P
Of course the 360 version doesn't disappear. But the Master Chief collection is only available on Xbox one. It is going to be epic.
I expect huge amounts of damage control to start appearing within the next couple of weeks as marketing and information is ramped up.
It doesn't matter; all the content of the collection can be played elsewhere right now, so you don't need an Xbox One to play any of it. Unless you mean Halo 2 online, which is not worth 60 dollars; worse, it's the only game in the pack that isn't already on Xbox 360, yet it is not 1080p/60fps.
It's not going to be epic; it's just a bunch of Halo games already on other consoles grouped together. The more you hype this, the more TLOU makes a dent in the Xbox One library. You can't have it both ways, and TLOU has 95 on Metacritic, which is higher than any Xbox One game so far, and it's a fresh new series.
You can't claim TLOU is a PS4 system seller, then say the MCC won't be.
@tormentos : Have you looked at Brad Wardell's resume? I think he knows more about DirectX than anyone on here, lol. If he says that DX11 is still single-CPU-core bound, then it's from his hands-on experience.
In the DX12 presentation by the DX12 dev lead, he shows you that DX11 is still core-0 bound. One could argue that this is the main reason for the massive changes in DX12. We still haven't heard what additional effects will be introduced. All we know is that the multi-core CPU usage is a game changer in DX12 compared to DX11.
Umm, no shock here. The console can't even run DX9/10/11 properly at 1080p; I doubt it would handle DX12 on top of that. Also, why would the Xbox One support DX12 anyway? The hardware was dated to begin with a year ago... and PC video cards are only now supporting DX12.
GCN supports DX12, so the Xbox One will support DX12.
As others have said, it will reduce CPU overhead, which might lead to better GPU utilisation in certain scenarios where the GPU is a bit under-utilised. The PS4 already has lower CPU overhead, as stated by the Metro devs here.
Oles Shishkovstov: Let's put it that way - we have seen scenarios where a single CPU core was fully loaded just by issuing draw-calls on Xbox One (and that's surely on the 'mono' driver with several fast-path calls utilised). Then, the same scenario on PS4, it was actually difficult to find those draw-calls in the profile graphs, because they are using almost no time and are barely visible as a result.
In general - I don't really get why they choose DX11 as a starting point for the console. It's a console! Why care about some legacy stuff at all? On PS4, most GPU commands are just a few DWORDs written into the command buffer, let's say just a few CPU clock cycles. On Xbox One it easily could be one million times slower because of all the bookkeeping the API does.
But Microsoft is not sleeping, really. Each XDK that has been released both before and after the Xbox One launch has brought faster and faster draw-calls to the table. They added tons of features just to work around limitations of the DX11 API model. They even made a DX12/GNM style do-it-yourself API available - although we didn't ship with it on Redux due to time constraints.
Nothing here is that new, to be honest, and it has been stated many times over by several different posters. DX12 will not improve the Xbox One hardware; it will improve utilisation of that hardware, but even then utilisation will still be behind the PS4's. The 900p vs 1080p gap will generally remain in the majority of multiplatform titles for the rest of the generation.
You can't claim TLOU is a PS4 system seller, then say the MCC won't be.
Quote me saying TLOU is a PS4 system seller.
I never claimed that. I am telling sts106mats that he should drop the whole MCC crap; it's a package consisting of multiple versions of games that are already almost entirely on Xbox 360.
@tormentos:
Bunches of new content added. Face it, MCC is a standalone milestone. Just get an X1, you know you want One. :P
Oh, it's the same crap, dude.
And if I really wanted an Xbox One, it would actually be 100 times more for Sunset Overdrive than for Halo. I have no problem saying that; Insomniac makes fun games and SO should be a fun one.
Who says I want it both ways? I loved TLOU, WTF?
The MCC is going to be epic: the option to play any number of levels from any of the games, in any order, plus co-op/matchmaking for every Halo map ever, including some completely re-imagined with interactive environments? Come on, haha. It will be quality and will put every other remaster to fucking shame.
A lot of people on here are talking about picking up Xbox Ones when it releases. Make no mistake, the collection will be a big hit with fans and newcomers alike.
Meanwhile on PS4, it's ballsack boy.
That is because you didn't have a PS3 like me.
That bold part is not true; ODST and Halo: Reach are not there.
Yet Sackboy is exclusive to the PS4 for now; the PS3 version has no release date.
http://www.gamestop.com/browse?nav=16k-3-little+big+planet+3,28zu0
Oh, did I forget to mention that LBP is a highly rated series across both its chapters?
LBP has 95 on Meta and LBP2 has 91, so yeah, it's a highly rated series that is popular on PlayStation.
Yeah, because GameSpot represents 99% of the Xbox One user base. Getting an XBO for MCC is like getting a PS4 for TLOU.
@tormentos : Have you looked at Brad Wardell's resume? I think he knows more about DirectX than anyone on here, lol. If he says that DX11 is still single-CPU-core bound, then it's from his hands-on experience.
In the DX12 presentation by the DX12 dev lead, he shows you that DX11 is still core-0 bound. One could argue that this is the main reason for the massive changes in DX12. We still haven't heard what additional effects will be introduced. All we know is that the multi-core CPU usage is a game changer in DX12 compared to DX11.
Have you seen how the recent batch of games like Alien Isolation, Watch_Dogs, Ryse, Metro Redux etc. show very good scaling across 6 CPU cores? Sure, DX11 is a pain to extract good threading from, and DX12 will make it easier while reducing the overhead, enabling devs to do more, but it is not as if it is an impossible task to get DX11 to use multiple cores.
@tormentos : Have you looked at Brad Wardell's resume? I think he knows more about DirectX than anyone on here, lol. If he says that DX11 is still single-CPU-core bound, then it's from his hands-on experience.
In the DX12 presentation by the DX12 dev lead, he shows you that DX11 is still core-0 bound. One could argue that this is the main reason for the massive changes in DX12. We still haven't heard what additional effects will be introduced. All we know is that the multi-core CPU usage is a game changer in DX12 compared to DX11.
Have you seen how the recent batch of games like Alien Isolation, Watch_Dogs, Ryse, Metro Redux etc. show very good scaling across 6 CPU cores? Sure, DX11 is a pain to extract good threading from, and DX12 will make it easier while reducing the overhead, enabling devs to do more, but it is not as if it is an impossible task to get DX11 to use multiple cores.
The main rendering is still CPU core-0 bound. I'm not sure how you can get around the limitations of the API. Do you have proof, as well as CPU utilization graphs, to back that up?
@tormentos : Have you looked at Brad Wardell's resume? I think he knows more about DirectX than anyone on here, lol. If he says that DX11 is still single-CPU-core bound, then it's from his hands-on experience.
In the DX12 presentation by the DX12 dev lead, he shows you that DX11 is still core-0 bound. One could argue that this is the main reason for the massive changes in DX12. We still haven't heard what additional effects will be introduced. All we know is that the multi-core CPU usage is a game changer in DX12 compared to DX11.
I don't care what so-called resume he has; multicore coding has been happening on PC for years, and MS's own DX11 has supported multithreaded rendering since 2009. Claiming that CPUs were using just one core is total bullshit.
Crysis 2 fully utilizes four cores and is seemingly unplayable on dual-core processors. The Phenom II X2 560 averaged just 24fps while the Core 2 Duo E8500 was even slower at 23fps. With the aid of HyperThreading, the Core i3 540 managed an average of 56fps.
http://www.techspot.com/review/379-crysis-2-performance/page8.html
Multicore usage has been here for years; even Crysis 2 was using 4 cores on PC.
He was WRONG.
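Just to make it concrete what "using 4 cores" actually means, here is a rough little sketch of my own (not from Crytek or any engine, just the general pattern) of splitting one job across however many hardware threads the CPU reports:

```cpp
// Illustrative only: a frame's worth of work carved into per-core chunks,
// the way a multithreaded job system spreads CPU work on a quad-core.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<float> data(1'000'000, 1.0f);
    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    const size_t chunk = data.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            const size_t begin = c * chunk;
            const size_t end = (c + 1 == cores) ? data.size() : begin + chunk;
            // Each core gets its own slice of the work, no shared writes.
            partial[c] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    std::printf("sum = %f across %u threads\n",
                std::accumulate(partial.begin(), partial.end(), 0.0), cores);
}
```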
And the PS4 didn't need Mantle, because Mantle emulates console APIs, not the other way around.
Here, this is a nice article on what DX12 really is, and on why people like you are totally blind.
Microsoft adopts Mantle but calls it DX12
GDC 2014: Completely different because it is not called Mantle, just ask MS PR.
Just as SemiAccurate predicted months ago, Microsoft has adopted AMD’s Mantle but are now calling it DX12. The way they did it manages to pull stupidity from the jaws of defeat by breaking compatibility in the name of lock-in.
http://semiaccurate.com/2014/03/18/microsoft-adopts-mantle-calls-dx12/
Now you may kick and scream that SemiAccurate is biased or wrong, but the fact is MS has already done this with Partially Resident Textures as well.
DX11.2 brings a major new feature titled ‘Tiled Resources’, that allows developers to be able to dynamically place high-resolution textures in a scene, all without placing much extra load on the GPU. The new technology works to make sure that the textures don’t appear blurry when seen at close up.
http://vr-zone.com/articles/directx-11-2-to-bring-tiled-resources-to-windows-8-1-and-the-xbox-one/41876.html#ixzz3G8g7Vsdu
http://vr-zone.com/articles/directx-11-2-to-bring-tiled-resources-to-windows-8-1-and-the-xbox-one/41876.html
So in mid-2013 MS announced it was bringing this incredible new feature called Tiled Resources...
Wrapping things up, for the time being while Southern Islands will bring hardware support for PRT software support will remain limited. As D3D is not normally extensible it’s really only possible to easily access the feature from other APIs (e.g. OpenGL), which when it comes to games is going to greatly limit the adoption of the technology.
http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/6
Yet AMD had it on its GPUs since 2011, and OpenGL supported it when D3D didn't. So two years later MS announces a great new feature, Tiled Resources, which is nothing more than MS adopting PRT under another name...
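For anyone who hasn't touched it, this is roughly what the D3D11.2 Tiled Resources path looks like. A bare sketch of my own, not anything from the article: error handling is omitted, and real code would first check the TiledResourcesTier cap before creating anything.

```cpp
// Minimal D3D11.2 tiled-resource sketch (assumes a Windows box with a D3D11.2 driver).
#include <d3d11_2.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> ctx;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &ctx);

    ComPtr<ID3D11Device2> device2;
    ComPtr<ID3D11DeviceContext2> ctx2;
    device.As(&device2);                // real code: also query D3D11_FEATURE_D3D11_OPTIONS1
    ctx.As(&ctx2);                      // and bail out if TiledResourcesTier is 0

    // A huge texture created as a *tiled* resource: no physical memory committed up front.
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width = 16384; texDesc.Height = 16384;
    texDesc.MipLevels = 1;  texDesc.ArraySize = 1;
    texDesc.Format = DXGI_FORMAT_BC1_UNORM;
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    texDesc.MiscFlags = D3D11_RESOURCE_MISC_TILED;
    ComPtr<ID3D11Texture2D> tiledTex;
    device2->CreateTexture2D(&texDesc, nullptr, &tiledTex);

    // A small tile pool: the physical 64 KB tiles that pieces of the big texture map into.
    D3D11_BUFFER_DESC poolDesc = {};
    poolDesc.ByteWidth = 64 * 1024 * 16;            // 16 tiles
    poolDesc.Usage = D3D11_USAGE_DEFAULT;
    poolDesc.MiscFlags = D3D11_RESOURCE_MISC_TILE_POOL;
    ComPtr<ID3D11Buffer> tilePool;
    device2->CreateBuffer(&poolDesc, nullptr, &tilePool);

    // Map the texture's first tile to the pool's first tile; remap at will as the camera moves.
    D3D11_TILED_RESOURCE_COORDINATE coord = {};     // tile (0,0,0), subresource 0
    D3D11_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;
    UINT rangeFlags = 0, poolStartOffset = 0, rangeTileCount = 1;
    ctx2->UpdateTileMappings(tiledTex.Get(), 1, &coord, &region,
                             tilePool.Get(), 1, &rangeFlags,
                             &poolStartOffset, &rangeTileCount, 0);
}
```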
All MS freaking does is hold back gaming with their late-ass API. Let me guess, DX12 will be tied to Windows 10, right? You people are suckers.
@tormentos : lol I will take his word as well as others over your links. Since you're so confident then I take it you've accepted my wager. We'll find out soon enough...
The main rendering is still CPU core-0 bound. I'm not sure how you can get around the limitations of the API. Do you have proof, as well as CPU utilization graphs, to back that up?
Spreading the work better and not using more than one core are two totally different things. Running Crysis 2 on a dual core showed that: if the load were just on one core, a two-core CPU would do fine, but it doesn't.
It uses all four cores on a four-core CPU, as the benchmarks already showed.
@tormentos : lol I will take his word as well as others over your links. Since you're so confident then I take it you've accepted my wager. We'll find out soon enough...
Oh, I know you will take any crap moron's word over my links...
http://www.bit-tech.net/bits/2008/09/17/directx-11-a-look-at-what-s-coming/6
Hahahaaaaaaaaaaaaaaaaaaa....
You were owned...hahaha
but but but he is a developer..
Can you prove that the memory controllers on the PS4 are the same as on PC? Because the memory on the PS4 works in a completely different way to how it works on PC; for starters, it is just one fast pool, not two.
But hey, do you have actual proof of programs running in GDDR5 to know latency is an issue and that the speed benefit doesn't counter the higher latency?
Do you have them? So you are just going by other people saying it has higher latency, period.
Do you know that agreeing with that moron you're just piggybacking on makes you an even bigger moron?
Out of 10 things that dude posts, he is wrong on 11.
That bold part is stupid; almost all CPUs have L2 cache, and Intel's Xeon Phi isn't any different. The whole 30 MB comes from the fact that each core has 512 KB of cache which can't be shared between cores, which means it's not 30 MB of shared cache, it's 512 KB per core, so 60 cores = 30 MB, ass.
The same cache can be found on any CPU regardless of whether it uses DDR3 as memory; cache is basically a middleman to speed up data access, and in this case it does the same thing with DDR3 as it does with GDDR5.
lol, can you prove that the PS4 is not using a standard technology to control GDDR5? The PS4 is not doing it in some completely different way, you noob... All it's using is a unified memory pool; it's no different from any other APU or integrated-graphics system using the primary memory pool for CPU and VRAM. It just has to fight with the GDDR5 memory.
Aww, so sad... It's a fact that GDDR5 latencies are greater than normal DDR3's because of the multiple memory controllers. You're looking at the bare modules themselves; when you're using eight 32-bit (dual 16-bit) inputs/outputs, the purpose of so many is to give constant bursts of speed allowing a high amount of bandwidth. This creates the latency, since you have multiple controllers having to read multiple DRAM chips and then combine that data for the GPU. This is where you get the stalls or pauses, aka latency, and because of the parallel nature of GPUs they can go to the next task instantly, out of order, and come back when the stall is over. Memory controllers for CPUs are wider and operate in a linear fashion, which is why GDDR5 isn't fine for normal CPU tasks: the smaller the job that needs processing, the more the weakness of GDDR5 shows.
The Intel Xeon Phi chips are x86 CPUs meant for parallel processing jobs. Yes, they may have only 512 KB of L2 on each core; however, you totally ignored the ring interconnect, a three-stage, bidirectional ring that moves data from the GDDR5. The purpose of the ring is to constantly feed data to the cores and take away what has been processed. Because of the nature of GDDR5, the ring interconnect helps overcome the latency, aka stalls. When a core or cores miss a chunk of data to be processed, a request is sent on one of the rings to tag what is needed next. This allows a fluid, linear flow of data so the co-processor can do its job correctly.
@tormentos : Have you looked at Brad Wardell's resume? I think he knows more about DirectX than anyone on here, lol. If he says that DX11 is still single-CPU-core bound, then it's from his hands-on experience.
In the DX12 presentation by the DX12 dev lead, he shows you that DX11 is still core-0 bound. One could argue that this is the main reason for the massive changes in DX12. We still haven't heard what additional effects will be introduced. All we know is that the multi-core CPU usage is a game changer in DX12 compared to DX11.
Have you seen how the recent batch of games like Alien Isolation, Watch_Dogs, Ryse, Metro Redux etc. show very good scaling across 6 CPU cores? Sure, DX11 is a pain to extract good threading from, and DX12 will make it easier while reducing the overhead, enabling devs to do more, but it is not as if it is an impossible task to get DX11 to use multiple cores.
The main rendering is still CPU core-0 bound. I'm not sure how you can get around the limitations of the API. Do you have proof, as well as CPU utilization graphs, to back that up?
As you can see from the selection of images, there is fairly even loading across 6 cores in these examples, and in some cases 8 cores. The same applies to Intel, but due to their higher IPC the graphs show a bit more variation in loading. They also have HT, which skews the graphs, as the HT cores take up some of the load.
@tormentos : Have you looked at Brad Wardell's resume? I think he knows more about DirectX than anyone on here, lol. If he says that DX11 is still single-CPU-core bound, then it's from his hands-on experience.
In the DX12 presentation by the DX12 dev lead, he shows you that DX11 is still core-0 bound. One could argue that this is the main reason for the massive changes in DX12. We still haven't heard what additional effects will be introduced. All we know is that the multi-core CPU usage is a game changer in DX12 compared to DX11.
Have you seen how the recent batch of games like Alien Isolation, Watch_Dogs, Ryse, Metro Redux etc. show very good scaling across 6 CPU cores? Sure, DX11 is a pain to extract good threading from, and DX12 will make it easier while reducing the overhead, enabling devs to do more, but it is not as if it is an impossible task to get DX11 to use multiple cores.
The main rendering is still CPU core-0 bound. I'm not sure how you can get around the limitations of the API. Do you have proof, as well as CPU utilization graphs, to back that up?
As you can see from the selection of images, there is fairly even loading across 6 cores in these examples, and in some cases 8 cores. The same applies to Intel, but due to their higher IPC the graphs show a bit more variation in loading. They also have HT, which skews the graphs, as the HT cores take up some of the load.
Thanks, this is great, but what exactly is being run on these cores? Brad's assertion is that the main rendering is not utilizing the other cores. A dev may be able to run other jobs on those cores, which may be what is indicated here?
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
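For context, this is roughly what that DX11 "record on many threads, submit on one" model described in the quote looks like in code. A bare-bones sketch of my own, not anything from the article: real code would check D3D11_FEATURE_THREADING support and actually record draw calls, which I've left as comments.

```cpp
// DX11 deferred contexts: worker threads build command lists, but only the
// immediate context on one thread actually dispatches them to the GPU.
#include <d3d11.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> immediate;   // the single submission context
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &immediate);

    const int workerCount = 4;
    std::vector<ComPtr<ID3D11CommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            // Each worker records into its own deferred context...
            ComPtr<ID3D11DeviceContext> deferred;
            device->CreateDeferredContext(0, &deferred);
            // (state setup and Draw calls for this worker's slice of the scene go here)
            deferred->FinishCommandList(FALSE, &lists[i]);
        });
    }
    for (auto& w : workers) w.join();

    // ...but dispatch still funnels through one thread and one context,
    // which is the single-threaded bottleneck DX12 is meant to remove.
    for (auto& cl : lists)
        immediate->ExecuteCommandList(cl.Get(), TRUE);
}
```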
lol, can you prove that the PS4 is not using a standard technology to control GDDR5? The PS4 is not doing it in some completely different way, you noob... All it's using is a unified memory pool; it's no different from any other APU or integrated-graphics system using the primary memory pool for CPU and VRAM. It just has to fight with the GDDR5 memory.
Aww, so sad... It's a fact that GDDR5 latencies are greater than normal DDR3's because of the multiple memory controllers. You're looking at the bare modules themselves; when you're using eight 32-bit (dual 16-bit) inputs/outputs, the purpose of so many is to give constant bursts of speed allowing a high amount of bandwidth. This creates the latency, since you have multiple controllers having to read multiple DRAM chips and then combine that data for the GPU. This is where you get the stalls or pauses, aka latency, and because of the parallel nature of GPUs they can go to the next task instantly, out of order, and come back when the stall is over. Memory controllers for CPUs are wider and operate in a linear fashion, which is why GDDR5 isn't fine for normal CPU tasks: the smaller the job that needs processing, the more the weakness of GDDR5 shows.
The Intel Xeon Phi chips are x86 CPUs meant for parallel processing jobs. Yes, they may have only 512 KB of L2 on each core; however, you totally ignored the ring interconnect, a three-stage, bidirectional ring that moves data from the GDDR5. The purpose of the ring is to constantly feed data to the cores and take away what has been processed. Because of the nature of GDDR5, the ring interconnect helps overcome the latency, aka stalls. When a core or cores miss a chunk of data to be processed, a request is sent on one of the rings to tag what is needed next. This allows a fluid, linear flow of data so the co-processor can do its job correctly.
It's not using the same memory structure as a PC to begin with, but since you are the one holding on to the memory controllers being a problem, it is you who has to prove they are the same.
APUs on PC use DDR3, you moron, not GDDR5, so once again the memory controllers could be rather different. You falsely claimed the Xeon Phi uses 30 MB of L2 cache to deal with the high latency of GDDR5, but in fact it uses 512 KB per core which isn't shared, which means 60 cores = 30 MB.
You will have a point when you quote Intel saying that they have 30 MB of L2 cache because of GDDR5 latency.
Basically, the Xeon Phi is the first CPU to use GDDR5 as main memory, so there is no benchmark that I know of that directly compares how much a workload is affected by latency on GDDR5 vs DDR3. You are assuming that since latency is bad for a CPU because it can cause stalls, higher latency = more stalls, and that isn't necessarily the case.
Again, DDR2 has lower latency than DDR3, but the speed gain was enough to overcome the latency problem, and in fact DDR3, not DDR2, is now the standard, and it is faster than DDR2.
Intel's decision to choose GDDR5 memory, said Hazra, was a combination on price, performance and capacity. "For now GDDR was the right solution both from a capacity standpoint and perhaps more important from a performance standpoint. When you have these many cores and FLOPS packed into a chip you have to feed it or you are going to waste the compute power. And compared to DDR, GDDR was the right bandwidth solution for that part. [...] It was a technical decision based on giving it the right memory subsystem and right now for the higher memory bandwidth that this card needs, GDDR was the right solution," said Hazra.
http://www.theinquirer.net/inquirer/news/2185879/intel-final-xeon-phi-configurations
I'll wait while you find a link from Intel stating they use 30 MB of L2 cache because GDDR5 is bad on latency...
Aw, so sad, el tormentos still copying and pasting without actually explaining the what and the why... lol, making excuses about PCs using DDR3, not GDDR5, in APUs. All CPUs gather and process data in a linear fashion. And on the DDR2 vs DDR3 thing... all normal DDR generations use one to two memory controllers, and the added latency from DDR to DDR2 to DDR3 is overcome by the *need* for more bandwidth to feed more complex and data-hungry items.
GDDR5 is totally different, but again you totally ignored the non-linear, parallel nature of GDDR5 in the way it's used, and the fact that you have multiple memory controllers feeding data from different areas.
Intel using GDDR5 is about parallel workloads that need the bandwidth. Go ahead and run around the fact that the PS4 is using a piss-poor CPU with a memory type that adds latency, and ignore the fact that DX12 will provide some enhancements for the X1 on the CPU side of things. Even with the additions of DX12 for the X1, both the X1 and PS4 are still using piss-poor CPUs.
You're trying your best to downplay it; it's such a sad and desperate attempt.
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
That last part is the focus of DirectX 12. The new version of DirectX, and the key Direct3D drivers underlying it, wants to give developers the ability to "fully exploit the CPU," Gosalia said. DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware. "With DirectX 12, we want the fastest way to exploit the GPU," he said.
http://arstechnica.com/gaming/2014/03/microsoft-touts-performance-improvements-for-existing-hardware-in-directx-12/
From your own fu**ing link..hahahaaaaaaaa
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
I freaking told you so. I have been telling you this for months: DX12 will do jack shit for the Xbox One, because what DX12 does is emulate what consoles have been doing for fu**ing years. Man, how dense can you fanboys be?
I have been telling you, your alt @tdkmillsy, @StormyJoe, @blackace, @slimdogmilionar, @Tighaman, even @04dcarraher, who is a so-called Hermit: DX12 is nothing more than a direct response from MS to Mantle and OpenGL's lower-overhead APIs. It basically brings gains that have been on consoles for years; the only morons who ignore this are the ones up there...
This is coming from MS itself... hahahaha
el tormentos is having a meltdown, quick, throw some gas onto the flames!
By the way, you haven't proved crap, aside from your desperate attempt at downplaying any of the gains DX12 can provide to the X1. Just be happy that your PS4 god will still outperform the X1 in gaming no matter what they do, because of the higher-tier GPU inside it.
el tormentos is having a meltdown, quick, throw some gas onto the flames!
By the way, you haven't proved crap, aside from your desperate attempt at downplaying any of the gains DX12 can provide to the X1. Just be happy that your PS4 god will still outperform the X1 in gaming no matter what they do, because of the higher-tier GPU inside it.
You are a moron confirmed and a MS suck up..hahahaaa
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DX12 = xbox one api..lol
As simple as that. Just like Tiled Resources is Partially Resident Textures under another name to fool morons, MS are masters at getting on the bus late and pretending they were first in line...
DX12 and Tiled Resources prove it; both have been around for a while. DX12 arrives in 2015 and Tiled Resources arrived in late 2013, when the capability has been in GCN since late 2011 and exposed by OpenGL too.
Deny it if you dare.
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
That last part is the focus of DirectX 12. The new version of DirectX, and the key Direct3D drivers underlying it, wants to give developers the ability to "fully exploit the CPU," Gosalia said. DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware. "With DirectX 12, we want the fastest way to exploit the GPU," he said.
http://arstechnica.com/gaming/2014/03/microsoft-touts-performance-improvements-for-existing-hardware-in-directx-12/
From your own fu**ing link..hahahaaaaaaaa
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
I freaking told you so. I have been telling you this for months: DX12 will do jack shit for the Xbox One, because what DX12 does is emulate what consoles have been doing for fu**ing years. Man, how dense can you fanboys be?
I have been telling you, your alt @tdkmillsy, @StormyJoe, @blackace, @slimdogmilionar, @Tighaman, even @04dcarraher, who is a so-called Hermit: DX12 is nothing more than a direct response from MS to Mantle and OpenGL's lower-overhead APIs. It basically brings gains that have been on consoles for years; the only morons who ignore this are the ones up there...
This is coming from MS itself... hahahaha
Again, by reducing the burden on the main thread, execution will happen faster. So there will be less processing time, which will increase performance.
You are self-owning via ignorance...
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
That last part is the focus of DirectX 12. The new version of DirectX, and the key Direct3D drivers underlying it, wants to give developers the ability to "fully exploit the CPU," Gosalia said. DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware. "With DirectX 12, we want the fastest way to exploit the GPU," he said.
http://arstechnica.com/gaming/2014/03/microsoft-touts-performance-improvements-for-existing-hardware-in-directx-12/
From your own fu**ing link..hahahaaaaaaaa
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
I freaking told you so. I have been telling you this for months: DX12 will do jack shit for the Xbox One, because what DX12 does is emulate what consoles have been doing for fu**ing years. Man, how dense can you fanboys be?
I have been telling you, your alt @tdkmillsy, @StormyJoe, @blackace, @slimdogmilionar, @Tighaman, even @04dcarraher, who is a so-called Hermit: DX12 is nothing more than a direct response from MS to Mantle and OpenGL's lower-overhead APIs. It basically brings gains that have been on consoles for years; the only morons who ignore this are the ones up there...
This is coming from MS itself... hahahaha
Also from that same article
"It's all up to us, and that's the way we like it," added Turn 10's Chris Tector, who demonstrated a version of Forza Motorsport 5 ported from Xbox One DirectX 11 to PC DirectX 12
So that means that the Xbox One's current API is DirectX 11. If I'm reading that sentence correctly, it says Forza was ported from DirectX 11 on Xbox to DirectX 12 on PC. So the Xbox One's current API would be DirectX 11.
Xbox One’s eventual DirectX 12 upgrade....
“They might be able to push more triangles to the GPU.......
Torok suggested that an improving ability to hit 1080p resolutions will actually come from a change in the way that “graphics programmers think about their pipelines”, rather than a magic DX 12 update.
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
That last part is the focus of DirectX 12. The new version of DirectX, and the key Direct3D drivers underlying it, wants to give developers the ability to "fully exploit the CPU," Gosalia said. DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware. "With DirectX 12, we want the fastest way to exploit the GPU," he said.
http://arstechnica.com/gaming/2014/03/microsoft-touts-performance-improvements-for-existing-hardware-in-directx-12/
From your own fu**ing link..hahahaaaaaaaa
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
I freaking told you so. I have been telling you this for months: DX12 will do jack shit for the Xbox One, because what DX12 does is emulate what consoles have been doing for fu**ing years. Man, how dense can you fanboys be?
I have been telling you, your alt @tdkmillsy, @StormyJoe, @blackace, @slimdogmilionar, @Tighaman, even @04dcarraher, who is a so-called Hermit: DX12 is nothing more than a direct response from MS to Mantle and OpenGL's lower-overhead APIs. It basically brings gains that have been on consoles for years; the only morons who ignore this are the ones up there...
This is coming from MS itself... hahahaha
Also from that same article
"It's all up to us, and that's the way we like it," added Turn 10's Chris Tector, who demonstrated a version of Forza Motorsport 5 ported from Xbox One DirectX 11 to PC DirectX 12
So that means that the Xbox One's current API is DirectX 11. If I'm reading that sentence correctly, it says Forza was ported from DirectX 11 on Xbox to DirectX 12 on PC. So the Xbox One's current API would be DirectX 11.
Xbox One’s eventual DirectX 12 upgrade....
“They might be able to push more triangles to the GPU.......
Torok suggested that an improving ability to hit 1080p resolutions will actually come from a change in the way that “graphics programmers think about their pipelines”, rather than a magic DX 12 update.
Forza is the title closest to enabling DX12, but that does not mean it has multithreaded the dispatch of GPU commands. Also, he was discussing porting Forza 5 "from Xbox One DirectX 11 to PC DirectX 12 in about four man-months, with a large increase in performance". So that quote needs context.
In any event we know :
Tormentos should be banned if he is wrong. :)
Off topic: Isn't it crazy that the Xbox has LZ decode/encode processors that are not even used now? Why would engineers spend their budget on that stuff? Programmers aren't able to access them directly; they operate asynchronously.
One has to be curious, even if you think the Xbox is using off-the-shelf components. Clearly it is not...
Also from that same article
"It's all up to us, and that's the way we like it," added Turn 10's Chris Tector, who demonstrated a version of Forza Motorsport 5 ported from Xbox One DirectX 11 to PC DirectX 12
So that means that the Xbox One's current API is DirectX 11. If I'm reading that sentence correctly, it says Forza was ported from DirectX 11 on Xbox to DirectX 12 on PC. So the Xbox One's current API would be DirectX 11.
Xbox One’s eventual DirectX 12 upgrade....
“They might be able to push more triangles to the GPU.......
Torok suggested that an improving ability to hit 1080p resolutions will actually come from a change in the way that “graphics programmers think about their pipelines”, rather than a magic DX 12 update.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
Let me quote it again...hahahaaa
It's DX11.2, but on Xbox One it has lower overhead. Again, I told you so; it is your misunderstanding of what DX12 is that made you claim all the things you did.
In fact, the article doesn't even quote Turn 10 right.
Turn 10 tried to port Forza 5 to DirectX 11, but found massive CPU overhead created a lot of stuttering. The DirectX 12 demo we saw, on the other hand, ran at a smooth framerate primarily because many of its new features—such as bundling resources—were drawn directly from Microsoft’s Xbox One tools.
http://www.pcworld.com/article/2110085/next-gen-directx-12-graphics-tech-revealed-hitting-microsoft-platforms-in-2015.html
Hahaha, Turn 10 claims that they ported Forza 5 to DX11 on PC and found massive CPU overhead, but the DX12 port ran smooth because they used features pulled directly from the Xbox One tools... hahaha
Don't partially quote whatever you want; that last quote is edited. In fact, what they really claim is that with DX12 the Xbox One might be able to push more triangles but won't be able to shade them, which basically means nothing and defeats the whole purpose of pushing more triangles.
“Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything,” Torok explained.
“They might be able to push more triangles to the GPU but they are not going to be able to shade them, which defeats the purpose,” he continued.
You are a joke from your own link..ahahaha
DX12 will change nothing...
Again, by reducing the burden on the main thread, execution will happen faster. So there will be less processing time, which will increase performance.
You are self-owning via ignorance...
You are not getting it: those reductions have already happened. DX12 will not deliver that to the Xbox One because it is on the Xbox One already; in fact, DX12 takes it from the Xbox...
""DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.""
What he is saying there is that DX12 on PC will show the gains because it copies a console-like API.
""Turn 10 tried to port Forza 5 to DirectX 11, but found massive CPU overhead created a lot of stuttering. The DirectX 12 demo we saw, on the other hand, ran at a smooth framerate primarily because many of its new features—such as bundling resources—were drawn directly from Microsoft’s Xbox One tools.""
Here is another confirmation: the Xbox One has had those gains since day one. PC doesn't have them, so when DX12 hits PC you will see some gains there; on Xbox One, nothing.
No, the only one who is ignorant and in denial is you; you are not reading what I quoted.
DX12 on PC = the Xbox One / Xbox 360 API.
Just like the PS4 doesn't need Mantle, the Xbox One doesn't need it either; it has had Mantle-like gains since day one.
Of course the 360 version doesn't disappear. But the Master Chief collection is only available on Xbox one. It is going to be epic.
I expect huge amounts of damage control to start appearing within the next couple of weeks as marketing and information is ramped up.
It doesn't matter; all the content of the collection can be played elsewhere right now, so you don't need an Xbox One to play any of it. Unless you mean Halo 2 online, which is not worth 60 dollars; worse, it's the only game in the pack that isn't already on Xbox 360, yet it is not 1080p/60fps.
It's not going to be epic; it's just a bunch of Halo games already on other consoles grouped together. The more you hype this, the more TLOU makes a dent in the Xbox One library. You can't have it both ways, and TLOU has 95 on Metacritic, which is higher than any Xbox One game so far, and it's a fresh new series.
To play them at native 1080p/60fps you need an Xbox One. You're the most butthurt, insecure troll on SW. I_can_haz was at your level as well, but he has disappeared from SW. Good riddance. GrenadeLicker is getting closer and closer to your level every day. LMAO!! You guys are a sad, sad bunch. It's great entertainment though. I rarely read your garbage, but every time I see one of your posts I laugh and smile. That's entertainment!!
Forza is the title closest to enabling DX12, but that does not mean it has multithreaded the dispatch of GPU commands. Also, he was discussing porting Forza 5 "from Xbox One DirectX 11 to PC DirectX 12 in about four man-months, with a large increase in performance". So that quote needs context.
In any event we know :
Tormentos should be banned if he is wrong. :)
"Turn 10 tried to port Forza 5 to DirectX 11, but found massive CPU overhead created a lot of stuttering. The DirectX 12 demo we saw, on the other hand, ran at a smooth framerate primarily because many of its new features—such as bundling resources—were drawn directly from Microsoft’s Xbox One tools.""
This ^^.
DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.
And this ^^.
GTFO from MS own fu**ing team.
Turn 10 >>>>>>>>>>>>>>>>>>>>>>>>> Brad Wardell.:)
DX12 copies the Xbox One API... hahaha
Two different sources, both coming from MS itself. Like I have been saying all along, the Xbox One already has those gains. Look at how it says that Turn 10 ported the game to DX11 and faced massive CPU overhead, but on DX12 on PC they didn't face it and it ran smooth; yeah, that is because they used the Xbox One API gains.
In fact, Forza 5 is locked at 60fps; it doesn't have problems with frame rate or overhead.
You are the one who should be banned, for quoting morons who talk with Misterxmedia... hahaha
“Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything,” Torok explained.
“They might be able to push more triangles to the GPU but they are not going to be able to shade them, which defeats the purpose,” he continued.
Hahaha, the Witcher developer saying DX12 is not what's behind the lower resolutions; the cause is the lack of shading power on the GPU... hahaha
To play them at native 1080p/60fps you need an Xbox One. You're the most butthurt, insecure troll on SW. I_can_haz was at your level as well, but he has disappeared from SW. Good riddance. GrenadeLicker is getting closer and closer to your level every day. LMAO!! You guys are a sad, sad bunch. It's great entertainment though. I rarely read your garbage, but every time I see one of your posts I laugh and smile. That's entertainment!!
Yeah, the same as with The Last of Us, which you downplayed as being on PS3 from the very first moment it was announced. Haha.
Wait, now you care about 1080p/60fps? You, a launch Xbox One owner?
Look who's talking about insecure trolls, the fake manticore.
It's a multiplatform collection, get over it; being 1080p/60fps doesn't make it a new series.
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
You do know that the DirectX API does more than just rendering, don't you? It is a collection of smaller APIs that have specific functions like 3D, audio etc. So with DX11 the rendering thread is single-core, but you can run a bunch of other stuff on the other cores. With DX12 you can multithread the rendering if you need to, but if your physics, AI etc. is taking up CPU runtime, then you only really need to split the rendering if you exceed the budget for a single core, or if your engine is not fully threaded and splitting the rendering will lead to better balance across cores. Now, with DX11 that is more likely to happen because it also has higher CPU overhead when issuing draw calls; DX12 will reduce the overhead for draw calls and the other rendering tasks the CPU issues, which reduces the need to have rendering on multiple threads unless you want to display a lot of units. You will see it more in RTS and TBS games, where the game gets a lot slower as you add more units because the CPU starts to become a bottleneck.
Those graphs all show pretty good CPU usage across multiple cores. Remember that these games are limited by the console CPU, so what might be 50% utilisation on a PC could easily be 90% on the console.
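To make the difference concrete, here is a stripped-down sketch of the DX12-style recording model: several threads each record their own command list, and the queue executes them in one batch. This is my own illustration, not anything from the articles quoted; it shows only the threading shape, with all the real setup (swap chain, PSO, root signature, fences, actual draws) left out.

```cpp
// DX12: per-thread command allocators + command lists, one cheap batched submit.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {};        // default: direct (graphics) queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int workerCount = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            // Each thread owns its allocator and list, so recording needs no locks.
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
            // (draw/dispatch recording for this thread's chunk of the frame goes here)
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Submission is one batched call instead of per-draw validation on a single thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```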
Aw, so sad, el tormentos still copying and pasting without actually explaining the what and the why... lol, making excuses about PCs using DDR3, not GDDR5, in APUs. All CPUs gather and process data in a linear fashion. And on the DDR2 vs DDR3 thing... all normal DDR generations use one to two memory controllers, and the added latency from DDR to DDR2 to DDR3 is overcome by the *need* for more bandwidth to feed more complex and data-hungry items.
GDDR5 is totally different, but again you totally ignored the non-linear, parallel nature of GDDR5 in the way it's used, and the fact that you have multiple memory controllers feeding data from different areas.
Intel using GDDR5 is about parallel workloads that need the bandwidth. Go ahead and run around the fact that the PS4 is using a piss-poor CPU with a memory type that adds latency, and ignore the fact that DX12 will provide some enhancements for the X1 on the CPU side of things. Even with the additions of DX12 for the X1, both the X1 and PS4 are still using piss-poor CPUs.
You're trying your best to downplay it; it's such a sad and desperate attempt.
I would really like you to show some documentation that the memory controller (or the bus) for GDDR5 has higher latency than for DDR3. I have shown documentation showing that act-to-act latency on GDDR5 is very similar to DDR3 (@tormentos linked to it), and it stands to reason that performing that test requires the memory modules to be attached to a memory controller. Further, the fact that there are GPUs that use GDDR5 in the high-end model but DDR3 in the low-end model shows that the memory controller is the same for both memory types, since it is part of the GPU die, and no manufacturer would bother to create a GPU die specifically for a DDR3 variant; the cost to do that would be much higher than the return. They also would not create a GPU with two memory controllers to support both memory types, because the extra die area required would also hurt the very slim margins in that sector of the dGPU market.
Without evidence to support your claim that the PS4 memory system has a latency penalty, I am afraid that I cannot consider your thoughts on the matter anything other than uninformed opinion.
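If anyone actually wants to settle the latency argument with numbers instead of links, the usual tool is a pointer-chase microbenchmark: a dependent chain of loads through a buffer too big for the caches, so every hop pays a trip to DRAM. A rough sketch of my own below; the figure it prints depends entirely on whatever machine you run it on, it proves nothing about the PS4 by itself.

```cpp
// Pointer-chase memory-latency sketch: Sattolo's shuffle builds one big cycle,
// so the chase really walks the whole buffer and defeats the prefetcher.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    const size_t count = size_t(1) << 25;          // 32M entries * 8 bytes = 256 MB
    std::vector<size_t> next(count);
    std::iota(next.begin(), next.end(), size_t{0});

    std::mt19937_64 rng{42};
    for (size_t i = count - 1; i > 0; --i) {        // Sattolo: a single cycle permutation
        std::uniform_int_distribution<size_t> d(0, i - 1);
        std::swap(next[i], next[d(rng)]);
    }

    size_t idx = 0;
    const size_t hops = 20'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < hops; ++i)
        idx = next[idx];                            // each load depends on the previous one
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
    std::printf("~%.1f ns per dependent load (idx=%zu)\n", ns, idx);
}
```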
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
You do know that the DirectX API does more than just rendering, don't you? It is a collection of smaller APIs that have specific functions like 3D, audio etc. So with DX11 the rendering thread is single-core, but you can run a bunch of other stuff on the other cores. With DX12 you can multithread the rendering if you need to, but if your physics, AI etc. is taking up CPU runtime, then you only really need to split the rendering if you exceed the budget for a single core, or if your engine is not fully threaded and splitting the rendering will lead to better balance across cores. Now, with DX11 that is more likely to happen because it also has higher CPU overhead when issuing draw calls; DX12 will reduce the overhead for draw calls and the other rendering tasks the CPU issues, which reduces the need to have rendering on multiple threads unless you want to display a lot of units. You will see it more in RTS and TBS games, where the game gets a lot slower as you add more units because the CPU starts to become a bottleneck.
Those graphs all show pretty good CPU usage across multiple cores. Remember that these games are limited by the console CPU, so what might be 50% utilisation on a PC could easily be 90% on the console.
The main bottleneck appears to be the rendering in DX11.
I would bet those graphs are not indicative of whats stated below:
DirectX 11 introduced the ability to put together sequences of instructions for the GPU on multiple threads. However, actual dispatch to the GPU was still performed on a single thread, and the design of DirectX 11 means that some of the work must be done on that single thread. Moreover, not all GPU drivers support this capability. DirectX 12 will increase the amount of work that's performed on the different cores, and similarly reduce the burden on the main thread.
Microsoft touts performance improvements for existing hardware in DirectX 12
You do know that the DirectX API does more than just rendering, don't you? It is a collection of smaller APIs that have specific functions like 3D, audio etc. So with DX11 the rendering thread is single-core, but you can run a bunch of other stuff on the other cores. With DX12 you can multithread the rendering if you need to, but if your physics, AI etc. is taking up CPU runtime, then you only really need to split the rendering if you exceed the budget for a single core, or if your engine is not fully threaded and splitting the rendering will lead to better balance across cores. Now, with DX11 that is more likely to happen because it also has higher CPU overhead when issuing draw calls; DX12 will reduce the overhead for draw calls and the other rendering tasks the CPU issues, which reduces the need to have rendering on multiple threads unless you want to display a lot of units. You will see it more in RTS and TBS games, where the game gets a lot slower as you add more units because the CPU starts to become a bottleneck.
Those graphs all show pretty good CPU usage across multiple cores. Remember that these games are limited by the console CPU, so what might be 50% utilisation on a PC could easily be 90% on the console.
The main bottleneck appears to be the rendering in DX11.
Draw calls are definitely slower in DX11 and DX12 will no doubt improve this, but if the GPU is being pushed anyway it does not free up the CPU to push more rendering tasks, although it will free it up to enhance other threads. It depends on what the utilisation is like on the console, and as there are no graphs showing this in a gaming scenario we do not have any data to go on. My estimation is that DX12 will not really allow better graphics, but it could enable more runtime to be used for AI, physics or other options.
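If it helps to see why per-draw-call cost matters at all, here is a minimal D3D11-flavoured sketch (assuming buffers, shaders and a per-instance vertex stream are already bound; UpdatePerObjectConstants is a hypothetical helper): the first loop pays the API and driver overhead once per object, the instanced version pays it once for the whole batch.

#include <d3d11.h>

// Hypothetical helper: writes this object's transform into a constant buffer.
void UpdatePerObjectConstants(ID3D11DeviceContext* ctx, UINT objectIndex);

// Naive path: one draw call per object, so the CPU-side cost of validation,
// state patching and submission is paid objectCount times per frame.
void DrawObjectsNaive(ID3D11DeviceContext* ctx, UINT indexCount, UINT objectCount)
{
    for (UINT i = 0; i < objectCount; ++i)
    {
        UpdatePerObjectConstants(ctx, i);
        ctx->DrawIndexed(indexCount, 0, 0);
    }
}

// Instanced path: per-instance data lives in a second vertex buffer bound
// beforehand, so the whole batch costs a single draw call on the CPU.
void DrawObjectsInstanced(ID3D11DeviceContext* ctx, UINT indexCount, UINT objectCount)
{
    ctx->DrawIndexedInstanced(indexCount, objectCount, 0, 0, 0);
}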
Mr Wardell thinks that the CPU on consoles is the bottleneck. He is not saying that graphics will be better, but he is saying that he is able to issue more GPU commands from the CPU because of DX12.
That is what reducing CPU overhead means. How the extra runtime is used depends on the game: some games will use it all for better AI, others might use it to put more stuff on screen. It depends entirely on what the dev feels the best use of it is.
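As a toy illustration of "the dev decides where the freed runtime goes" (the numbers are made up and the subsystem functions are just placeholders that sleep): if the submission slice of a 33 ms frame shrinks, the loop below simply has more budget left over for physics and AI.

#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;
using ms = std::chrono::duration<double, std::milli>;

// Placeholder subsystems; a real game would do actual work here.
void SubmitRendering(double cost_ms) { std::this_thread::sleep_for(ms(cost_ms)); }
void StepPhysics(double budget_ms)   { std::this_thread::sleep_for(ms(budget_ms * 0.5)); }
void StepAI(double budget_ms)        { std::this_thread::sleep_for(ms(budget_ms * 0.5)); }

int main()
{
    const double frame_budget_ms = 33.3;          // 30 fps target
    for (double submit_cost : { 12.0, 6.0 })      // heavier vs lighter CPU-side submission cost
    {
        auto start = Clock::now();
        SubmitRendering(submit_cost);
        double left = frame_budget_ms - ms(Clock::now() - start).count();
        StepPhysics(left);                        // the leftover budget is split between gameplay systems
        StepAI(left);
        std::printf("submission %.1f ms -> %.1f ms left for AI/physics\n", submit_cost, left);
    }
    return 0;
}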
Agreed, however tormentos has a different opinion.
Draw calls are definitely slower in DX11 and DX12 will no doubt improve this, but if the GPU is being pushed anyway it does not free up the CPU to push more rendering tasks, although it will free it up to enhance other threads. It depends on what the utilisation is like on the console, and as there are no graphs showing this in a gaming scenario we do not have any data to go on. My estimation is that DX12 will not really allow better graphics, but it could enable more runtime to be used for AI, physics or other options.
But that is the problem: draw calls have never been an issue on consoles, which is why you will see an increase on PC but not on Xbox One, because those gains have been there since the beginning, and last gen too.
Last gen the PS3 and Xbox 360 could do 10,000 or 20,000 draw calls per frame, or even more, at 30/60 FPS, while on PC if you did more than 3,000 you got into performance trouble; now you can do 100,000 on PC with Mantle.
Mr Wardell thinks that the CPU on consoles is the bottleneck. He is not saying that graphics will be better, but he is saying that he is able to issue more GPU commands from the CPU because of DX12.
Which is funny, because the well-known Witcher developer states otherwise about DX12 and those so-called gains.
CPUs are not a bottleneck on consoles, they never have been, unless you mean they reach the top performance they can get, which will not change at all, since the gains DX12 brings to PC have been on consoles for years, which is what you and other blind fanboys refuse to admit even though I fu**ing quote MS on it: DX12 gains = console gains.
Lol Tormentos still claiming DX12 won't do anything for the Xbox One, hahaha this is getting sad. Sure it's nothing major, but it will help, already stated by numerous developers and people that actually know something about it. Are you one of the early adopters of DX12, Tormentos? Or are you just a copy/paste guy that takes snippets from articles as long as they look good for you? Sad cow is sad.
“Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything,” Torok explained.
“They might be able to push more triangles to the GPU but they are not going to be able to shade them, which defeats the purpose,” he continued.
http://www.nowgamer.com/dx-12-wont-fix-xbox-ones-1080p-issues-says-the-witcher-3-dev/
Developers like this one?
DX12 will do nothing new because it has already done it for the Xbox One; it has been doing it since launch, stated by MS itself..hahaha
Agreed, however tormentos has a different opinion.
I don't disagree with lower CPU overhead; my argument is that the lower CPU overhead is already on Xbox One, because lower CPU overhead is a FEATURE OF CONSOLES, not PC, which is why on PC lower CPU overhead will yield some gain. On Xbox One that will not happen, because the lower CPU overhead is already there.
Stated by MS itself, you are just too blind to see it. Period, no matter what, the Xbox One can't benefit from something it already has..
Normally that would be entirely accurate, but the Xbox One API is closer to the PC spec of DX11 than I thought. The Metro devs talk about it in their interview with Digital Foundry. There are closer-to-the-metal and custom API options you can use on Xbox One, but that is more dev intensive and requires a lot more work. That is possibly how MS was able to get both Destiny and Diablo 3 up to 1080p, by using those tools instead of the standard DX11 SDK. What DX12 will do is make those custom API tools a lot easier and quicker for devs to use, so they can be coding closer to the metal without the extra dev time that is currently required.
It will not mean a huge jump in performance, because it only shifts the baseline; what it will probably mean is that AI in games is improved, or the CPU does some more physics, or you get more on-screen enemies, or whatever else the devs think will enhance their game with the extra CPU runtime.
I would really like you to show some documentation that the memory controller (or the bus) for GDDR5 has higher latency than the one for DDR3. I have shown documentation showing that act-to-act latency on GDDR5 is very similar to DDR3, and it stands to reason that to perform that test the memory modules have to be attached to a memory controller. Further, the fact that there are GPUs that use GDDR5 in the high-end model but DDR3 in the low-end model shows that the memory controller is the same for both memory types, since it is part of the GPU die and no manufacturer would bother to create a GPU die specifically for a DDR3 variant: the cost of doing that would be much higher than the return. They also would not create a GPU with two memory controllers to support both memory types, because the extra die area required would also hurt the very slim margins in that sector of the dGPU market.
Without evidence to support your claim that the PS4 memory system has a latency penalty, I am afraid I cannot consider your thoughts on the matter anything other than uninformed opinion.
Here is an example of DDR2 latencies from different brands versus other memory types like GDDR3 and GDDR5. The fact is that GDDR5 uses multiple memory controllers, and the more of them you have the more latency it causes; a 256-bit bus uses 8 of them (32 x 8 = 256 bit). GDDR5 is made for bursts and a high amount of bandwidth, and with the parallel nature and massive processing power of GPUs they do not care about the latency. Unless these Jaguars have a massive amount of onboard cache, GDDR5 will cause more of an issue than DDR3.
"Global Memory" is device memory, either dedicated in the case of a GPU or shared system memory in the case of an APU. It can hold any data type and can be read or written and accessed by any thread running on the GPU.
While not all GPUs cache global memory, they do have TLB caches - just like modern CPUs. The "random in-page" access pattern that Sandra uses is especially designed to avoid TLB misses and thus measure the "real" cache/memory latencies. The "full random" access pattern can be used to measure TLB miss penalties where desired."
This means that once the cache is full and processed they have to get more data from memory, which shows the latency between the different types of memory and buses.
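If anyone wants to see the latency-vs-bandwidth distinction on their own machine instead of arguing over screenshots, here is a crude C++ sketch (buffer size and step count are arbitrary, and a proper tool would also pin the thread and control for the TLB the way Sandra does): the dependent pointer chase pays the full memory latency on every access, while the streaming pass is limited by bandwidth.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Same buffer, two access patterns: a dependent pointer chase (pays the full
// memory latency on every step) and a sequential sum (limited by bandwidth,
// since the prefetcher can stream data in ahead of use).
int main()
{
    const std::size_t n = 64ull * 1024 * 1024 / sizeof(std::size_t);   // ~64 MiB, far larger than L3

    // Build one big random cycle through the buffer so the chase visits everything.
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{7});
    std::vector<std::size_t> next(n);
    for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];

    const std::size_t steps = 20000000;
    std::size_t idx = order[0];
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < steps; ++i) idx = next[idx];            // latency-bound: each load depends on the last
    double chase_ns = std::chrono::duration<double, std::nano>(
        std::chrono::steady_clock::now() - t0).count() / steps;

    t0 = std::chrono::steady_clock::now();
    std::size_t sum = std::accumulate(next.begin(), next.end(), std::size_t{0});   // bandwidth-bound: sequential stream
    double stream_s = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();

    std::printf("dependent access: %.1f ns each | streaming: %.2f GB/s (%zu %zu)\n",
                chase_ns, (n * sizeof(std::size_t)) / stream_s / 1e9, idx, sum);
    return 0;
}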
That graph is showing latency as a function of clock cycles. Yes, in that scenario the cycle latency for GDDR5 is higher than DDR3, but because GDDR5 runs through more clock cycles in the same amount of time, the latency as a function of time is similar for both, as the Hynix data sheets have shown.
Now I am looking at some latency benchmarks for quad- vs dual-channel memory on the X79 motherboard, and there is no difference in latency, as you can see here (picture on left). This compares 2x 64-bit buses to 4x 64-bit buses and shows no difference in latency at all. I see no reason to assume that 8x 32-bit buses would incur a latency penalty when going from two to four 64-bit buses does not.
It is blatantly obvious that L1 cache has lower latency than L2 cache, which has lower latency than L3 cache, which has lower latency than system memory. Why do you think server CPUs have such large L3 cache sizes compared to their desktop counterparts? It is to avoid trips to main memory. These CPUs are the same spec, so if one has to make a trip to main memory, so does the other, and the overall trip time is practically the same for both systems.
This "PS4 has higher memory latency" myth has to stop; it is bogus, I have debunked it many times now, and it is getting tiresome.
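For the cache-hierarchy point specifically, the same pointer-chase trick run over growing working sets makes the L1/L2/L3/RAM steps visible. The sizes below are arbitrary and the exact numbers will differ per CPU; this is only a sketch.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Average dependent-load latency for a working set of `bytes`, using a single
// random cycle so the hardware prefetcher cannot hide the miss cost.
static double ChaseLatencyNs(std::size_t bytes)
{
    const std::size_t n = bytes / sizeof(std::size_t);
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{1});
    std::vector<std::size_t> next(n);
    for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];                        // close the cycle

    const std::size_t steps = 10000000;
    std::size_t idx = order[0];
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < steps; ++i) idx = next[idx];
    double ns = std::chrono::duration<double, std::nano>(
        std::chrono::steady_clock::now() - t0).count();
    volatile std::size_t sink = idx;                      // keep idx observable so the loop is not optimised away
    (void)sink;
    return ns / steps;
}

int main()
{
    // Sweep from comfortably inside L1 up to well past typical L3 sizes.
    for (std::size_t kib : {16, 64, 256, 1024, 4096, 16384, 65536})
        std::printf("%6zu KiB working set: %6.1f ns per access\n",
                    kib, ChaseLatencyNs(kib * 1024));
    return 0;
}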