What is it with you guys and "meltdowns"?
Apparently, whenever someone posts a response to any other user's post here on this board, they are having a "meltdown"!
Oops :( I just replied to your post, you know what that means?
@leonkennedy97: God of War and Spiderman haven't even released yet, and Spiderman definitely doesn't look top 10 from recent gameplay.
That's a load of BS. Any game on PS4 Pro is also on PS4, and likely on the original XBone and XBoneS, and soon on XBoneX. Point being, the systems "holding devs back" are the lowest performing ones; any gripes anybody has about a lack of power on XBoneX or PS4 Pro are invalid, since the original XBone, XBoneS and PS4 are the starting point for multiplat games. Insulting the Pro only further insults everything behind it. Of the soon-to-be five consoles between M$ and Sony on the market, the Pro will be the second most powerful. Last place is what holds devs back, not first or second of five. Talented devs can get BotW running on a Wii U; if you can't hack it on a PS4 Pro, maybe think of a new line of work...
@kingtito: lmao, so you called me quad's alt and now you call me a cow when I'm far from it. We all called out your bullshit and you claim everyone else is having a meltdown. You're so obsessed with quad. It's sad. Let it go. You'll feel better, kid.
Hey, if it walks like a duck and sounds like a duck, by golly it must be a duck. Even if you aren't a cow (I doubt it), you're still in here melting down because I said quack is having a meltdown. It obviously went over your head, son, but I'm curious why you felt the need to come in defending the meltdown king quack.
@Basinboy: There is no such thing as what? The games I mentioned are easily some of the best looking games available, period, end of story. The advantage is that they spend their time on one spec as opposed to five. I don't care what kind of PC snob you are. The PS4 Pro and X1X deliver superb IQ.
Devs will develop around the base PS4 since that's where the sales are; the PS4 Pro, if anything, is a positive thing for devs.
What an idiot. Is he new to video games? This has been the case for generations.
@lexxluger: Ah ha! Look at this meltdown! Fucking cow lem. Clem.
I think I'm getting the hang of this.
You need more emojis, graphs, and exclamation marks.
This also applies to about 90% of gaming PCs, where most people game on a budget. Targeting the super high end reduces the pool to a limited number of people. This developer probably wants to do some cool things, and when he's not being "held back" by hardware he'll probably be held back once he discovers the complexity of these alleged games he wants to make. Or he's talking about high-res assets and stuff, and at this point... I just don't care... please give me fun games. Games like Sea of Thieves actually look like they're bringing something a bit new to the table instead of more mundane crap with prettier textures.
This generation has been the weakest competitively in console history.
The 8800 GTX was released a few days ahead of the PS3.
AMD being missing in action from PC's high-end GPU market mirrors ATI being missing in action from PC's high-end GPU market in 2005/2006.
@leonkennedy97: and they'd all perform better on PC given the chance
That may not be entirely true. One of the things consoles actually do better than PCs is a unified memory architecture. ND uses a lot of GPU compute power, and PCs have a bottleneck pushing data back and forth over the PCI Express bus. This is one of the areas where consoles have a significant advantage if developers use it extensively.
FOR INSTANCE. What if Xbox One X used its extra GPU power not for 4K, but for physics or something? Those settings might work significantly worse on PC on any hardware. This is something a developer can choose to do.
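To put rough numbers on that bus cost, here's a minimal back-of-envelope sketch in Python. The bandwidth figure (~16 GB/s for a PCI-E 3.0 x16 link) is a nominal assumption, and the 64 MB per-frame physics buffer is purely hypothetical:

```python
# Back-of-envelope: what a CPU<->GPU round trip over PCI-E costs per frame.
# ~16 GB/s is a nominal PCI-E 3.0 x16 rate; the 64 MB buffer is hypothetical.

def transfer_ms(megabytes, gb_per_s):
    """Milliseconds to move `megabytes` over a link at `gb_per_s` GB/s."""
    return megabytes / 1024.0 / gb_per_s * 1000.0

payload_mb = 64                               # physics data shipped each frame
round_trip = 2 * transfer_ms(payload_mb, 16)  # CPU->GPU, then results back
frame_budget = 1000.0 / 60                    # ~16.7 ms per frame at 60 fps

print(f"PCI-E round trip: {round_trip:.1f} ms of a {frame_budget:.1f} ms frame")
print("Unified memory: ~0 ms - CPU and GPU read the same physical memory")
```

Halve or double the assumptions and the point stands: the round trip eats a real slice of a 60 fps frame, while a unified-memory console skips the copy entirely.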
I always ask myself "Do these people actually know what they're talking about?"
waahahah's viewpoint is OK.
It's the latency of CPU --> CPU memory --(PCI-E)--> GPU memory --> GPU --> GPU memory --(PCI-E)--> CPU memory --> CPU process.
On consoles, it's just CPU --> memory --> GPU --> memory --> CPU process.
To compensate, Intel's AVX2 gained GPU-style gather instructions.
An Intel Haswell quad-core with a 4GHz Turbo mode offers 512 GFLOPS FP32 via 256-bit AVX2.
An Intel Skylake-X quad-core with a 4GHz Turbo mode will offer 1 TFLOPS FP32 via AVX-512.
An Intel Skylake-X eight-core with a 4GHz Turbo mode will offer 2 TFLOPS FP32 via AVX-512.
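Those figures drop out of simple peak-FLOPS arithmetic: cores x FP32 lanes x FMA units x 2 flops per FMA x clock. A quick Python check (the two-FMA-unit count is an assumption chosen to match the post's numbers):

```python
# Peak FP32 = cores * SIMD lanes * FMA units * 2 flops/FMA * clock (GHz).

def peak_gflops(cores, simd_bits, fma_units, ghz):
    lanes = simd_bits // 32                      # FP32 lanes per vector register
    return cores * lanes * fma_units * 2 * ghz   # FMA = 2 flops per lane per cycle

print(peak_gflops(4, 256, 2, 4.0))  # Haswell quad, AVX2:      512.0 GFLOPS
print(peak_gflops(4, 512, 2, 4.0))  # Skylake-X quad, AVX-512: 1024.0 (~1 TFLOPS)
print(peak_gflops(8, 512, 2, 4.0))  # Skylake-X 8-core:        2048.0 (~2 TFLOPS)
```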
The problem with XBO is the lack of spare GCN CUs for physics work.
If 10 of X1X's 40 CUs (1.5 TFLOPS at 1172 MHz) were allocated to physics work, XBO couldn't run this game.
If 9 of PS4 Pro's 36 CUs (~1 TFLOPS at 914 MHz) were allocated to physics work, PS4's rendering would be gimped.
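Same arithmetic on the GCN side, where each CU has 64 lanes doing one FMA (2 flops) per clock; XBO's full 12 CUs at 853 MHz are included to show why it has nothing to spare:

```python
# GCN peak FP32: CUs * 64 lanes * 2 flops (FMA) * clock in MHz.
def gcn_tflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1e6

print(gcn_tflops(10, 1172))  # 10 of X1X's 40 CUs:    ~1.50 TFLOPS
print(gcn_tflops(9, 914))    # 9 of PS4 Pro's 36 CUs: ~1.05 TFLOPS
print(gcn_tflops(12, 853))   # ALL of XBO's 12 CUs:   ~1.31 TFLOPS, no spares
```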
Another problem....
If the base PS4 is dropped, PS4 Pro's maximum software optimization involves tiling compute shaders into the GPU's 2MB L2 cache and using PRT to fake a larger texture memory store. Games built for the PS4 Pro hardware profile wouldn't work on PS4.
On NVIDIA Maxwell/Pascal, compute/shader/render tile sizes change automatically according to the GPU's L2 cache size.
At the moment, the RX Vega delay is mostly down to drivers for the brand-new tiled-render-to-L2-cache feature.
@waahahah: but to depend on the premise that a developer's expertise can result in greater performance/fidelity requires assuming that they cannot or will not exercise the same with respect to the greater resource pool that modular devices allow for.
But my intention was not to further reinforce the arbitrary distinction between consoles and PCs. All game systems are PCs, and software is simply coded for certain structures. But I started getting flamed, gave up on trying to argue, and resorted to being snarky.
The thing is, you can't design around a bottleneck without designing for inferior data flow. There is no magic dev talent that can get around an inferior hardware setup. All the raw power isn't going to give PCs the consoles' unified memory space. PCs are hindered by the PCI Express lane when it comes to GPU compute, and will not be able to do as much as consoles as things move forward. This is NOT an arbitrary distinction. Consoles and desktop PCs do have a clear distinction in hardware, and with Xbox One X's DirectX command processor there are special bits of hardware that PCs will likely never have.
Lol, dev mentions PS4 Pro is outdated, cows complain "what is the Xbox One doing then?"
PS4 - holding back games
PS4 Pro - holding back games
Xbox One - holding back games
Xbox One X - bringing games to the future with current(ish) hardware
The real problem here is Sony's half-assed mid-gen upgrade.
@waahahah: nm, you're missing what I'm hitting at and addressing an argument I'm not making.
I don't think your point matters. The point is, these aren't generic PCs; they share the CPU/GPU in common but are actually different architectures. While the pool of raw resources will always be bigger on PCs, consoles have the ability to make games in a different way. Everything PCs can do can be scaled down to fit on consoles, so a PC of matching hardware performance will generally be equal to a console, but the other way around is not true. There are things developers can leverage on consoles, without even trying, to get more efficiency and better utilization, where an equally equipped PC will have to be scaled down; and as long as the bus is what it is on most PCs, generally even a superior PC will have to be scaled down. And GPU compute isn't something developers won't utilize, but passing data between the CPU and GPU is basically free on consoles.
Also note that the mass of PCs generally aren't that good; most are equivalent to, or slightly better or worse than, consoles. The added resources can't be used to do anything fundamentally different from what consoles can or can't do.
MS guided X1X's design around existing 3D engines, hence improving the AMD GPU's Pixel Engine path with some 60 graphics pipeline changes. The keywords are "graphics pipeline".
-----------
PS4 Pro was designed around PSVR (two 1920x1080 screens) and a narrow optimization path, i.e. PS4 Pro was designed for an extreme, Sony CELL SPU-style workload via the Compute Engine to L2 cache path. But designing compute shaders to tile into the GPU's 2MB L2 cache breaks on the base PS4, which is missing that 2MB L2 cache feature. Sony's Vega NCU selection shows this extreme SPU-style workload with the Compute Engine to L2 cache path.
On XBO and X1X, XBO's ESRAM can fake X1X's GPU 2MB L2 cache, with lower tiling performance.
Read http://www.playstationlifestyle.net/2017/03/10/horizon-zero-dawn-ps4-pro-utilization/
"The machine has a lot of power, so we understand what it can do, but the game was already in place when we learnt about the Pro and got the dev kit. So what we’ve done is taken the power and tried to make the best improvements we think for our existing game. We didn’t design the game from the ground up for the Pro, but we tried to use that processing power to do the best things we could to make the experience look or feel better for you.
I don’t think it’s fair to say we haven’t used it fully, but I do think it’s fair to say that if we’d designed the game from the ground up for the Pro, we’d have probably used it differently."
Horizon Zero Dawn's developers weren't ready for PS4 Pro's unexpected features. PS4 Pro 3D engines need to be redesigned around heavy tiled compute into the 2MB L2 cache and heavy PRT usage (faking a larger memory store with tiled textures).
PS4 Pro was designed for Sony's needs, while MS has heavy 3rd-party 3D engine needs, e.g. Unreal Engine 4 for Crackdown 3, Sea of Thieves, Gears of War 4 and State of Decay 2. MS's heavy Unreal Engine 4 usage and the related hardware optimizations lead to side benefits for other NVIDIA GameWorks games.
Sony picked the subset of Vega IP blocks that suits their needs.
MS picked the subset of Vega IP blocks that suits their needs.
PS4 Pro's double-rate FP16 enables it to reach ~70 percent of my GTX 980 Ti's results in Mantis Burn Racing. Don't underestimate PS4 Pro; there's just a very narrow software optimization path for extracting PS4 Pro's TFLOPS power.
PS4 Pro devs have to heavily tile their math problems and can't rely on an easy unified-memory, large-storage programming model. If tiling with XBO's 32 MB ESRAM was hard, try tiling with a GPU's 2 MB L2 cache.
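To see the difference in scale, here's a minimal tiling sketch; the 1080p FP16 RGBA working set (8 bytes per pixel) is an assumed example format, not any particular game's setup:

```python
import math

# How many tiles a render target needs to fit in a given on-chip store.
def tiles_needed(width, height, bytes_per_pixel, store_bytes):
    surface = width * height * bytes_per_pixel
    return math.ceil(surface / store_bytes)

MB = 1024 * 1024
print(tiles_needed(1920, 1080, 8, 2 * MB))   # GPU 2MB L2:     8 tiles per pass
print(tiles_needed(1920, 1080, 8, 32 * MB))  # XBO 32MB ESRAM: 1 tile, it just fits
```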
@waahahah: Your argument is confined to the preconceptions I am disputing, and you're addressing something I am not. But regardless, I disagree with the assumptions you are making within those preconceptions, attempting to equate differing architectures by evaluating comparable hardware within each structure. I am not disputing the efficiency of a unified memory architecture, but the model of being restricted to static hardware when the other model is not, and can compensate for its architectural disadvantages by other means.
I have no desire to carry on this back and forth, since you'd rather dismiss my contributions than address them and bullheadedly assert your correctness. I wish you well.
I addressed your issues. To me it sounds like you think raw resources can overcome everything. So your assertion that all console games would perform better on PC is not true, given the architectural difference of unified memory. You tried to dismiss this difference as "arbitrary".
You'll always get better resolution and textures, but at the end of the day the unified memory architecture lets developers utilize GPU compute much more heavily than PCs will ever be able to, until PCs move to a unified memory space. The PCI Express bus becomes too much of a bottleneck. And consoles have been moving more and more towards HSA, distinguishing themselves from PC architecture even more.
Yeah, to be clear, I'm not saying these systems can budget much for compute; they probably can't. But future generations will likely be held back by PCs. An NVIDIA engineer on reddit was commenting about the unified memory space and just how much PCI-E hurts compute in games. Granted, at some point PCs might move to a unified architecture; even disk storage is getting tied directly to the memory bus, and you can get SSDs that insert into memory slots.
There's the incoming PCI-E 4.0, and NVIDIA's take with NVLink, versus the current PCI-E 3.0.
Right, but is it still as good as free? Which is what the NVIDIA developer was alluding to. When you have no bus to shuffle data over, you can do a lot more within a frame's time.
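Rough numbers on "as good as free": doubling the link halves the copy, but it doesn't vanish. Nominal x16 rates, same hypothetical 64 MB compute buffer as before:

```python
# Copy time each way for a per-frame compute buffer at nominal x16 link rates.
def copy_ms(megabytes, gb_per_s):
    return megabytes / 1024.0 / gb_per_s * 1000.0

payload_mb = 64  # hypothetical per-frame buffer
print(f"PCI-E 3.0 x16 (~16 GB/s): {copy_ms(payload_mb, 16):.2f} ms each way")
print(f"PCI-E 4.0 x16 (~32 GB/s): {copy_ms(payload_mb, 32):.2f} ms each way")
print("Unified memory: 0.00 ms - there is no copy")
```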
@phantomfire335: Compared to full open world games? Are you kidding?
You didn't say open world games before, you just said it was top 10, which it ain't. If you're talking open world games then it may squeeze in.
(Witcher 3 and AC Unity look MUCH better)
@phantomfire335: Errr, no they don't. And it could count in the top 10 regardless. Especially with all that's going on.
There have been users that have actual "meltdowns" where they post long tirades.
But it's now been diluted to any time fanboys disagree with each other.
Did this guy see the Anthem demo? Or that Uncharted expansion? Or Days Gone rendering nearly 100 zombies on screen at once? Or the tech demo of Beyond Good and Evil 2 where they rendered entire planets at once with no load times between them? Or the GoW 4 trailer?
This developer really thinks what's holding us back is GPUs? What???
Classic trick that's been around for at least 15 years: make so-so multiplatform games and when people unfavorably compare them to vastly superior games, blame the consoles both of your games are on for being underpowered and ruining the entire industry.
Hey, don't look at me, it's obviously Sony's fault (please buy our games).
I agree. A proper fusion example would be powered by a ZEN plus NAVI 11 based APU, not factoring in any AMD server ZEN + Greenland monster APUs.
@pimphand_gamer: What's a GTX 880?
https://www.techpowerup.com/199750/nvidia-geforce-gtx-880-detailed
- 20 nm GM204 silicon
- 7.9 billion transistors
- 3,200 CUDA cores
- 200 TMUs
- 32 ROPs
- 5.7 TFLOP/s single-precision floating-point throughput
- 256-bit wide GDDR5 memory interface
- 4 GB standard memory amount
- 238 GB/s memory bandwidth
- Clock speeds of 900 MHz core, 950 MHz GPU Boost, 7.40 GHz memory
- 230W board power
Consoles have always held PC games back; that's not new. Good looking, fun games can still be had on these systems. The Switch is coming out with gorgeous looking games, and the PS4 Pro is even more powerful than that. It's just the annoying tweaking developers need to do to get games to work on console. More work means more money.
Real GM204 silicon refers to GTX 980 and GTX 970.
https://www.techpowerup.com/gpudb/2621/geforce-gtx-980
Shading Units: 2048. That's 2048 CUDA cores with a 256-bit wide GDDR5 memory interface.
My GTX 980 Ti is GM200 silicon with 2816 CUDA cores and a 384-bit wide GDDR5 memory interface. https://www.techpowerup.com/gpudb/2724/geforce-gtx-980-ti
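For what it's worth, both sets of numbers sanity-check with plain peak-FLOPS arithmetic (CUDA cores x 2 flops per FMA x clock); a quick sketch using the rumor's 900 MHz base clock and the GTX 980's 1126 MHz base:

```python
# NVIDIA peak FP32: CUDA cores * 2 flops (FMA) * clock in MHz.
def nv_tflops(cuda_cores, mhz):
    return cuda_cores * 2 * mhz / 1e6

print(nv_tflops(3200, 900))   # rumored "GTX 880": 5.76, the quoted 5.7 TFLOP/s
print(nv_tflops(2048, 1126))  # actual GTX 980:    ~4.6 TFLOP/s
```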