I hope this satisfies the usual System Wars myopia about the next frontier of television panels and the wild speculation that next gen consoles may or may not exist in an 8k world.
The fact is, 8K arrives before the next consoles, so it makes sense to support the format to some degree. It's not just platform makers throwing out bullet points as part of some grand conspiracy.
Diminishing returns. Diminishing returns. This would be great for high-res photo viewing. It's incredibly demanding and pointless for moving images, where each frame is on screen for just 1/24th or 1/60th of a second.
MicroLED has all the benefits of OLED (perfect blacks) without the negatives of image burn-in and color degradation, with the added benefit of much higher brightness (1000+ nits) needed for HDR.
Dynamic HDR
Widespread adoption of FreeSync 2
Variable Refresh Rate (VRR) reduces or eliminates lag, stutter and frame tearing for more fluid and better detailed gameplay.
Technologies like these are much more useful than 8K.
Not surprising in the least. You can already buy a Samsung QLED 8K TV from Best Buy and other retailers for $4,500.
People who think Sony is too quick to announce 8K support just aren't keeping up with how fast TV tech is advancing.
That's quite an aggressive price. It will definitely come down to $2,000 during rare clearances in the US.
I expect 8K to become even more common in midgrade TVs.
Others gotta remember that 8K will eventually be almost no metric for quality, just like how we're plagued with crappy-to-mediocre 4K TVs currently. Check last year's Black Friday deals on $200 55-inch 4K TVs. You'd be surprised how many people just hear "4K" and assume it's good quality. This has happened to me many times when I bring in the low and mid performers: "That's a 4K TV!? Awesome!", yet their 8+ year old 1080p Samsung plasma ****s on it as far as overall image quality goes, unless we're sitting very close.
Other metrics are peak/sustained brightness, true contrast, reflection handling, sub-pixel arrangement, viewing angles, uniformity, motion, brightness fluctuation, etc.
I think no one is doubting that 8K will come; it's just that no one is expecting the next consoles to run games at 8K. But after the Pro, Sony knows it can get away with that. And I'm sure MS will pull similar BS with their console.
At best we can probably expect graphically less demanding games to run at native 8K, 8K checkerboard, or 8K via the current secret sauce of "next-gen AI upscaling" (think of a hypothetical AMD counterpart to DLSS).
Graphically heavy AAA games like a future Halo 6, God of War 5, etc.? Nah. I don't think those will internally render at 8K.
Variable Rate Shading is a new, easy to implement rendering technique enabled by Turing GPUs. It increases rendering performance and quality by applying varying amount of processing power to different areas of the image. VRS works by varying the number of pixels that can be processed by a single pixel shader operation. Single pixel shading operations can now be applied to a block of pixels, allowing applications to effectively vary the shading rate in different areas of the screen.
Variable Rate Shading can be used to render more efficiently in VR by rendering to a surface that closely approximates the lens corrected image that is output to the headset display. This avoids rendering many pixels that would otherwise be discarded before the image is output to the VR headset.
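In case anyone wants to see what that looks like in code: on the PC side the same hardware capability is exposed through Direct3D 12's VRS API. This is only a minimal sketch, assuming a VRS-capable device and a command list already recorded as ID3D12GraphicsCommandList5, with error handling omitted:

#include <windows.h>
#include <d3d12.h>

// Check whether the adapter exposes Variable Rate Shading at all.
bool SupportsVariableRateShading(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))))
        return false;
    return options6.VariableShadingRateTier !=
           D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED;
}

// Tier 1 usage: pick a coarse per-draw shading rate, e.g. one pixel-shader
// invocation per 2x2 block for peripheral geometry in a VR pass, then
// restore full-rate shading for the draws that need per-pixel quality.
void RecordCoarseThenFineDraws(ID3D12GraphicsCommandList5* cmdList)
{
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... record draws that can tolerate coarse shading ...

    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    // ... record draws that need full per-pixel shading ...
}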
However, Andrew Goossen tells us that the GPU supports extensions that allow depth and ID buffers to be efficiently rendered at full native resolution, while colour buffers can be rendered at half resolution with full pixel shader efficiency
Xbox Scorpio's GPU includes hardware features that don't exist in PC GPUs prior to Turing, hence the old GCN TFLOPS vs resolution comparison is not valid for the X1X.
Besides the X1X's memory bandwidth advantage, the old RX 580 with 6.1 TFLOPS was being compared against an X1X GPU that has a variable-rate-shading-like feature, which gives an apparent jump in rendering resolution with native geometry edges from a 6 TFLOPS GPU.
The X1X's variable-rate-shading-like feature is different from the PS4 Pro's checkerboard rendering hardware. Games like RDR2 show the X1X was able to break the old GCN TFLOPS vs resolution scaling, i.e. resolution vs TFLOPS scales from XBO and PS4 to PS4 Pro, but the X1X breaks the trend.
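For anyone who hasn't seen checkerboard rendering explained: the basic idea, greatly simplified and not how either console's hardware actually wires it up, is to shade only half the pixels each frame in a checkerboard pattern and fill the gaps from the previous frame (real implementations also lean on motion vectors and the ID/depth buffers mentioned above):

#include <cstddef>
#include <cstdint>
#include <vector>

// Each frame, only pixels where (x + y + frameParity) is even were shaded.
// Missing pixels reuse last frame's full-resolution result, which stands in
// here for a proper motion-compensated history lookup.
void ReconstructCheckerboard(std::vector<uint32_t>&       output,
                             const std::vector<uint32_t>& halfShaded,
                             const std::vector<uint32_t>& previousFrame,
                             int width, int height, int frameParity)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const std::size_t i = static_cast<std::size_t>(y) * width + x;
            const bool shadedThisFrame = ((x + y + frameParity) & 1) == 0;
            output[i] = shadedThisFrame ? halfShaded[i] : previousFrame[i];
        }
    }
}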
I think no one is doubting that 8K will come; it's just that no one is expecting the next consoles to run games at 8K. But after the Pro, Sony knows it can get away with that. And I'm sure MS will pull similar BS with their console.
MS was the first to falsely claim true 4K, not Sony.
Xbox Scorpio's GPU includes hardware features that don't exist in PC GPUs prior to Turing, hence the old GCN TFLOPS vs resolution comparison is not valid for the X1X.
Besides the X1X's memory bandwidth advantage, the old RX 580 with 6.1 TFLOPS was being compared against an X1X GPU that has a variable-rate-shading-like feature, which gives an apparent jump in rendering resolution with native geometry edges from a 6 TFLOPS GPU.
The X1X's variable-rate-shading-like feature is different from the PS4 Pro's checkerboard rendering hardware. Games like RDR2 show the X1X was able to break the old GCN TFLOPS vs resolution scaling, i.e. resolution vs TFLOPS scales from XBO and PS4 to PS4 Pro, but the X1X breaks the trend.
Man, the RX 580's bandwidth is totally for itself; the XBO X's bandwidth is shared and has a penalty, just like the PS4 does.
Comparing the Xbox One X's peak SHARED bandwidth vs the RX 580's GPU-only bandwidth is stupid, as the PC's CPU side runs on its own DDR4 bandwidth without touching the video card's bandwidth.
The RX 580 can beat the Xbox One X in several games, and so does the 1060.
@vfighter: same. I might go 1440p in a year or two
Edit - P.S., people are flat out delusional if they think they will be gaming in 8K on the PS5. Supporting the format means it can do things like display a picture or do 8K movie playback. Do you have any concept of how resource intensive RENDERING a game at that res would be?
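Just to put numbers on that (standard resolutions, back-of-the-envelope only): 8K means shading 4x the pixels of 4K and 16x the pixels of 1080p every frame, before any other cost goes up.

#include <cstdio>

int main()
{
    const long long p1080 = 1920LL * 1080;  //  2,073,600 pixels
    const long long p4k   = 3840LL * 2160;  //  8,294,400 pixels
    const long long p8k   = 7680LL * 4320;  // 33,177,600 pixels

    std::printf("8K = %lldx the pixels of 4K, %lldx the pixels of 1080p\n",
                p8k / p4k, p8k / p1080);    // prints 4x and 16x
    return 0;
}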
Xbox Scorpio's GPU includes hardware features that don't exist in PC GPUs prior to Turing, hence the old GCN TFLOPS vs resolution comparison is not valid for the X1X.
Besides the X1X's memory bandwidth advantage, the old RX 580 with 6.1 TFLOPS was being compared against an X1X GPU that has a variable-rate-shading-like feature, which gives an apparent jump in rendering resolution with native geometry edges from a 6 TFLOPS GPU.
The X1X's variable-rate-shading-like feature is different from the PS4 Pro's checkerboard rendering hardware. Games like RDR2 show the X1X was able to break the old GCN TFLOPS vs resolution scaling, i.e. resolution vs TFLOPS scales from XBO and PS4 to PS4 Pro, but the X1X breaks the trend.
Man, the RX 580's bandwidth is totally for itself; the XBO X's bandwidth is shared and has a penalty, just like the PS4 does.
Comparing the Xbox One X's peak SHARED bandwidth vs the RX 580's GPU-only bandwidth is stupid, as the PC's CPU side runs on its own DDR4 bandwidth without touching the video card's bandwidth.
The RX 580 can beat the Xbox One X in several games, and so does the 1060.
1. The X1X CPU's memory bandwidth usage is bound by the lowest common denominator, the PS4, which is about 10 GB/s in one direction from the CPU to the GPU. Both the PS4 Pro and the X1X have similar target frame rates, hence the CPU's geometry control-point workload is similar for both boxes.
2. XBO DirectX resource management overhead is lower than the PS4's and the PC's DirectX12 (per EA DICE's claims). Less CPU load on XBO when compared to PC's DirectX12.
3. The X1X CPU doesn't handle Direct3D12 API to GPU ISA translation, since that is handled by the GPU's micro-coded Direct3D12 engine. This is different on the PC, since PC GPUs don't directly consume DirectX12 API calls. Less CPU load on XBO when compared to PC.
4. The X1X's variable rate shading feature is not automatic, i.e. programmers need to code for it explicitly.
5. An AMD-sponsored game like Far Cry 5 shows the X1X's superiority over the RX 580 and GTX 1060. Forza Horizon 4 has downgraded alpha rain effects when compared to Forza Motorsport 7's version. MS's demonstration with FM7's wet track was intentional, i.e. proving memory bandwidth superiority over the RX 580 with heavy alpha effects. Your argument that the X1X's shared memory is inferior to the RX 580's is debunked!
6. The X1X GPU can be bound by the CPU's power while GPU power is still available, e.g. 1440p at a 60 fps target can be problematic while 4K at 30 fps is fine. The X1X was specifically designed for Digital Foundry's XBO resolution gate!
7. The X1X GPU's 2 MB render cache reduces trips to external memory; it's missing on the RX 580. Micro-tile cache rendering is usually a manual process on AMD GPUs, i.e. programmers need to code for it explicitly (see the sketch below).
At 4K, the RX 580 vs GTX 1070 difference is only 4 to 5 fps, which is small enough for the X1X GPU to jump over the RX 580.
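On point 7, for readers wondering what a small render cache buys: the win comes from pushing a tile of the render target through many blend operations while it is still on-chip, instead of paying external-memory traffic for every read-modify-write. A CPU-side concept sketch only; the tile size, layer count and the 2 MB figure are taken from this thread, not from any real driver or hardware path:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr int kWidth  = 3840;
constexpr int kHeight = 2160;
// A 512x512 RGBA8 tile is 1 MB, comfortably inside an assumed 2 MB cache.
constexpr int kTile   = 512;

// Blend several alpha layers into the target, finishing every layer for one
// tile before moving on. The target tile stays cache-resident across all
// layers, so it is read from and written back to external memory once in
// total instead of once per layer.
void BlendLayersTiled(std::vector<uint32_t>&       target,
                      const std::vector<uint32_t>& layers, int layerCount)
{
    for (int ty = 0; ty < kHeight; ty += kTile) {
        for (int tx = 0; tx < kWidth; tx += kTile) {
            for (int layer = 0; layer < layerCount; ++layer) {
                const std::size_t layerBase =
                    static_cast<std::size_t>(layer) * kWidth * kHeight;
                for (int y = ty; y < std::min(ty + kTile, kHeight); ++y) {
                    for (int x = tx; x < std::min(tx + kTile, kWidth); ++x) {
                        const std::size_t i =
                            static_cast<std::size_t>(y) * kWidth + x;
                        target[i] += layers[layerBase + i]; // stand-in for an alpha blend
                    }
                }
            }
        }
    }
}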
And here I am, totally happy with my 1080p display, with zero reason to buy anything higher at the moment.
@uninspiredcup said:
0.2% difference for 12X the price.
@ezekiel43 said:
Diminishing returns. Diminishing returns. This would be great for high-res photo viewing. It's incredibly demanding and pointless for moving images, where each frame is on screen for just 1/24th or 1/60th of a second.
I don't understand why anyone would buy 8K when there's virtually no content, even apart from the diminishing returns. And higher resolution without higher frame rates results in a jarring experience.
I think no one is doubting that 8K will come; it's just that no one is expecting the next consoles to run games at 8K. But after the Pro, Sony knows it can get away with that. And I'm sure MS will pull similar BS with their console.
MS was the first to falsely claim true 4K, not Sony.
@tormentos: The Xbox runs games like AC, RDR2, Far Cry 5, Metro and many others in true 4k, so I'll assume you're just joking when trying to make an equivalence between the Pro and the X.
@vfighter: same. I might go 1440p in a year or two
Edit - P.S., people are flat out delusional if they think they will be gaming in 8K on the PS5. Supporting the format means it can do things like display a picture or do 8K movie playback. Do you have any concept of how resource intensive RENDERING a game at that res would be?
Moving from 1080p to 1440p several years ago was the best thing to happen to me and I love gaming in 1440p; I'm not all that interested in 4K. And so with that, 8K means nothing to me, nor do I care about raw graphics like that. You'll love 1440p, Xantufrog, it's the perfect balance between high resolution and better framerates in my opinion, right in the middle.
A tech demo is far away from an actual game. Did they actually confirm the game was coming to PS5 at 8K 120fps?
Racing games have the best chance (Forza at 4K, for example), but 8K 120fps, I'm not sure about that.
I don't think 8K and 120FPS were even mixed in the same sentence.
It was 4K 120FPS, and I am 100% sure that is for VR, not normal games.
@ronvalencia said:
1. The X1X CPU's memory bandwidth usage is bound by the lowest common denominator, the PS4, which is about 10 GB/s in one direction from the CPU to the GPU. Both the PS4 Pro and the X1X have similar target frame rates, hence the CPU's geometry control-point workload is similar for both boxes.
NO, this is something pulled from your ass as always, just like when you wanted to claim FP16 for the Xbox One X just because the Pro had it.
I have told you 10 times that the bandwidth comparisons you make to help the Xbox One X's case are bad, because you insist on comparing a PC GPU that doesn't share its bandwidth with the CPU against the Xbox One X, where it is shared.
So you claim 300+ GB/s of bandwidth for the Xbox One X and claim the RX 580 is crippled because it has 256 GB/s, when in reality the Xbox's 300+ GB/s is shared and the RX 580's is not.
If you want to make a more down-to-earth comparison, be fair:
256 GB/s for the RX 580 + whatever you get from the separate DDR4 bandwidth.
So 50 GB/s from DDR4 + 256 GB/s from the RX 580, and you don't get the loss you get on an APU where the CPU eats bandwidth disproportionately.
So again, stop comparing the RX 580's bandwidth with the Xbox One X's; the XBO X can't use 100% of its bandwidth for the GPU and takes a nice penalty from the APU design.
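To put rough numbers on both sides of this argument (every figure below is one quoted in this thread or a labelled assumption, not a measurement):

#include <cstdio>

int main()
{
    // Peak figures thrown around in this thread.
    const double x1xSharedGBs = 326.0; // X1X unified GDDR5 pool
    const double cpuClaimGBs  = 10.0;  // CPU-side usage claimed earlier in the thread
    const double rx580GddrGBs = 256.0; // RX 580, GPU-only
    const double pcDdr4GBs    = 50.0;  // separate system memory for the PC's CPU

    // Pure assumption for illustration: contention on a shared bus costs more
    // than the raw CPU draw, so charge the CPU traffic twice.
    const double contentionFactor = 2.0;

    const double x1xGpuUsable = x1xSharedGBs - cpuClaimGBs * contentionFactor;

    std::printf("X1X GPU-usable (assumed): ~%.0f of %.0f GB/s shared\n",
                x1xGpuUsable, x1xSharedGBs);
    std::printf("RX 580 PC: %.0f GB/s GPU-only + %.0f GB/s CPU-only\n",
                rx580GddrGBs, pcDdr4GBs);
    return 0;
}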
Just bought a Samsung Q90R 4K, which in itself is slightly too early as currently only a few Netflix shows and iTunes offer anything in 4K... doubt 8K is going to be needed for a very long time.
@davillain-: yeah I think it will be great too. Now that I have a 1440p capable machine, it seems a waste not to have the monitor to do it (although I've been supersampling in some games)
Just bought a Samsung Q90R 4K, which in itself is slightly too early as currently only a few Netflix shows and iTunes offer anything in 4K... doubt 8K is going to be needed for a very long time.
Netflix 4K isn't really 4K either; compare the quality of (real) 4K Blu-ray vs a 4K Netflix stream...
A tech demo is far away from an actual game. Did they actually confirm the game was coming to PS5 at 8K 120fps?
Racing games have the best chance (Forza at 4K, for example), but 8K 120fps, I'm not sure about that.
I don't think 8K and 120FPS were even mixed in the same sentence.
It was 4K 120FPS, and I am 100% sure that is for VR, not normal games.
It was 8K 120fps; what was never mentioned was that it was on the PS5. People just jumped to conclusions.
I'm confused... You do know 8K has been around for a while now? And they aren't that much more than a 4K TV; there is just no reason to get one, and there won't be a reason when the "next generation" consoles come out either.
I mean, most 4K movie content even today is still 2K upscaled. The movie industry has yet to fully adopt 4K: even when a movie is shot on 6K cameras, it is usually mastered at 2K, so the 4K image you see is still that 2K master upscaled to 4K. Not many actually master at native 4K.
Also, even if 8K does kick off in the next 1-3 years, chances are movies will just upscale 4-6K master content to 8K, or worse, 2K to 8K.
Gaming though... 8K is not happening, not on the coming hardware. Even if it's technically possible in some games, developers won't have a reason to do it unless Sony/Microsoft pay them to for exclusives, just to tick a box and have a new thing to hype.
@ronvalencia said:
1. The X1X CPU's memory bandwidth usage is bound by the lowest common denominator, the PS4, which is about 10 GB/s in one direction from the CPU to the GPU. Both the PS4 Pro and the X1X have similar target frame rates, hence the CPU's geometry control-point workload is similar for both boxes.
NO, this is something pulled from your ass as always, just like when you wanted to claim FP16 for the Xbox One X just because the Pro had it.
I have told you 10 times that the bandwidth comparisons you make to help the Xbox One X's case are bad, because you insist on comparing a PC GPU that doesn't share its bandwidth with the CPU against the Xbox One X, where it is shared.
So you claim 300+ GB/s of bandwidth for the Xbox One X and claim the RX 580 is crippled because it has 256 GB/s, when in reality the Xbox's 300+ GB/s is shared and the RX 580's is not.
If you want to make a more down-to-earth comparison, be fair:
256 GB/s for the RX 580 + whatever you get from the separate DDR4 bandwidth.
So 50 GB/s from DDR4 + 256 GB/s from the RX 580, and you don't get the loss you get on an APU where the CPU eats bandwidth disproportionately.
So again, stop comparing the RX 580's bandwidth with the Xbox One X's; the XBO X can't use 100% of its bandwidth for the GPU and takes a nice penalty from the APU design.
Try playing a real console-ported game with 8 GB of single-channel DDR3-1600 (~12.8 GB/s) and a GTX 1080 Ti or R9-390X 8GB. You will find the game consoles' 30 fps and 60 fps targets are easy to hit. Game-console CPU workloads haven't changed since 2013; my old Intel Core i7-2600-era CPU would do the job.
Don't expect 120 to 144 fps, which is exclusive to the gaming PC. Game-console CPU workloads are a waste on high-end gaming PCs.
Your benchmark doesn't reflect a real game's memory bandwidth usage and doesn't factor in L2 cache programming-boundary optimizations.
Read http://www.redgamingtech.com/ps4-architecture-naughty-dog-sinfo-analysis-technical-breakdown-part-2/ for Sony's Jaguar CPU optimization guidance.
https://medium.com/software-design/why-software-developers-should-care-about-cpu-caches-8da04355bb8a is an on-chip cache optimization guide for modern x86 CPUs.
Your argument shows you don't have a commercial x86 programming background.
CELL's SPU local-store programming model is NOT new. The difference is that an x86 CPU's cache overflow is more forgiving; it doesn't trigger an exception error on overflow!
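The cache-boundary point is easier to see in code. This is only a minimal sketch of the idea in the two guides linked above, with the chunk size and data layout being my own assumptions: run every pass over one L2-sized slice of the data before moving to the next slice, so later passes hit cache instead of main memory.

#include <algorithm>
#include <cstddef>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz, age, pad; }; // 32 bytes

// Assumed 2 MB shared L2 (one Jaguar module); keep the working set to half.
constexpr std::size_t kL2Bytes   = 2u * 1024u * 1024u;
constexpr std::size_t kChunkSize = (kL2Bytes / 2) / sizeof(Particle);

void Integrate(Particle* p, std::size_t n, float dt)
{
    for (std::size_t i = 0; i < n; ++i) {
        p[i].px += p[i].vx * dt;
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
    }
}

void Age(Particle* p, std::size_t n, float dt)
{
    for (std::size_t i = 0; i < n; ++i) p[i].age += dt;
}

void Update(std::vector<Particle>& particles, float dt)
{
    // Both passes finish on one cache-sized chunk before the next chunk is
    // touched, so the second pass reuses data the first pass just loaded.
    for (std::size_t base = 0; base < particles.size(); base += kChunkSize) {
        const std::size_t count = std::min(kChunkSize, particles.size() - base);
        Integrate(particles.data() + base, count, dt);
        Age(particles.data() + base, count, dt);
    }
}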
The PS4 CPU is the lowest common denominator, and CPU geometry control workloads are programmed to those specs. Modern PC-exclusive RTS games may exceed the little Jaguar CPU's cache size, but those aren't console games.
You will see a difference in game simulation when the PS5 sets an "8-core Zen 2" CPU workload. That is when the PC master race's high-end rigs with 8 fat CPU cores and 16 threads or more come into play, and we get real value from high-end PC rigs instead of just playing the same console-ported game at higher frame rates and/or higher geometry detail (before the GPU's tessellated geometry amplification tricks).
GPU tessellated geometry amplification = reduced CPU load on the geometry control workload.
For example, Digital Foundry tested a 1st-generation Ryzen with 4 cores and 8 threads at 3 GHz.
https://www.youtube.com/watch?v=LjjRdrVAHCQ
"Witcher 3 based on current gen constraints targetting 30 hz on consoles.... in general gameplay on the open world, our Ryzen candidate moves north of one hundred frames per second: a three to four X improvement even before we factor in processor specific optimizations in a fixed box like a console.
Witcher 3's base console target is at 900p/1080p at 30 FPS while Ryzen drives the same 1080p game at ~122 fpsfor the CPU test.
1st gen Ryzen 4C/8T at 3Ghz can pump out 4X CPU geometry control throughput when compared XBO/PS4 on top of PC driver's Direct3D API to GPU ISA translation workload.