OMG no! These two methods seem like the worst possible way to handle this process.
In the Kahawai method, it essentially renders a low-res game locally and overlays it with a high-res layer rendered on a remote server. That's fine for mobile games, which it seems to be aiming at, but it falls flat at the level of interactivity and physics we expect from next-gen games. Modern games have high-complexity models with dynamic physics. Plenty has been written about garbage hitboxes and why they ruin gameplay: a good hitbox will match the character poly model as closely as possible, unless there's a specific design reason not to, and objects will (should) interact in accordance with what you see on screen.
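To make the delta-overlay idea concrete, here's a minimal sketch of how I understand it, using numpy arrays as stand-in frames (the function names and toy values are mine, not from the Kahawai paper, which streams a compressed video delta rather than raw pixels):

```python
import numpy as np

def server_delta(high_frame: np.ndarray, low_frame: np.ndarray) -> np.ndarray:
    """Server renders both detail levels and ships only the difference
    (in practice this delta is video-compressed before streaming)."""
    # Use a signed type so negative differences survive the subtraction.
    return high_frame.astype(np.int16) - low_frame.astype(np.int16)

def client_compose(local_frame: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Client renders the cheap low-detail frame itself and adds the
    streamed delta to recover the high-detail image."""
    return np.clip(local_frame.astype(np.int16) + delta, 0, 255).astype(np.uint8)

# Toy 2x2 grayscale "frames": client and server must render the same
# low-detail frame deterministically or the delta won't line up.
low  = np.array([[10, 20], [30, 40]], dtype=np.uint8)   # local render
high = np.array([[15, 25], [35, 45]], dtype=np.uint8)   # server render

delta = server_delta(high, low)
assert np.array_equal(client_compose(low, delta), high)
```

And that's exactly my complaint: the gameplay simulation still runs on the weak local box, so the physics fidelity is capped at mobile level no matter how pretty the overlay is.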
We already hear that same complaint in recent Far Cry comparisons, where the environments and physics are garbage.
https://www.dsogaming.com/articles/far-cry-2-features-more-advanced-physics-than-far-cry-5-despite-being-released-10-years-ago/
Wrapping a mobile game in an HD shell will only exacerbate the problem.
Now Outatime's solution seems to understand that attempting to split development, or reduce the complexity of a game and throw lipstick on a pig, isn't the best game development model (although it's viable in today's market). Rather, their solution is to keep the traditional means of development, but speculatively render possible next-frame scenarios in the cloud and present the correct frame based on user input. While it recognizes that you cannot necessarily split the processing for gameplay, the rendering resources required would be roughly four times what a single system needs: one render for each of what it claims would be the four possible predicted frames. It does address incorrect predictions, but that seems like it would be a problem in particularly manic games, and ineffective there.
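Here's a rough sketch of that speculation loop as I read it (all names and the simple four-way input set are hypothetical; the real system predicts inputs and handles rollback far more elaborately):

```python
# Outatime-style speculation: the server renders one frame per plausible
# next input; the client shows whichever one matches the real input.

PREDICTED_INPUTS = ["up", "down", "left", "right"]  # the "four possible frames"

def render(state: int, inp: str) -> str:
    """Stand-in for a full frame render; in reality this is the expensive part."""
    return f"frame(state={state}, input={inp})"

def server_speculate(state: int) -> dict:
    """Render every predicted branch ahead of time -- this is where the
    roughly 4x resource cost comes from: one full render per branch."""
    return {inp: render(state, inp) for inp in PREDICTED_INPUTS}

def client_select(speculated: dict, actual_input: str):
    """Show the pre-rendered frame matching the user's real input. A miss
    means a rollback/re-render, which is exactly what would hurt in
    fast, manic games."""
    return speculated.get(actual_input)  # None on a misprediction

frames = server_speculate(state=42)
print(client_select(frames, "left"))  # hit: frame is ready "instantly"
print(client_select(frames, "jump"))  # miss: None -> stall and correct
```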
My opinion on split and local processing:
The best use of split/cloud processing (IMO) would be to utilize a high-performance remote system for rendering and high-level physics calculations, partitioned by the proximity of objects to the user.
I.e., fully render the player and most of the scene surrounding them locally. Rasterization could be used on dynamic objects in close proximity, and I would say cloud-based ray tracing is better suited for distant and non-dynamic elements. MS seemed to try to do that with the Xbox, but it didn't appear that their tools were fleshed out enough to make it viable.
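Something like the following partitioning is what I have in mind; it's a sketch under my own assumptions (the distance cutoff, the object fields, and the rule itself are made up for illustration):

```python
from dataclasses import dataclass

NEAR_CUTOFF = 50.0  # arbitrary "close proximity" threshold, in meters

@dataclass
class SceneObject:
    name: str
    distance_from_player: float
    is_dynamic: bool

def split_render_queues(scene):
    """Local GPU rasterizes what's close and moving (latency-sensitive);
    the cloud ray-traces distant or static geometry, where a few frames
    of network lag is invisible."""
    local, cloud = [], []
    for obj in scene:
        if obj.is_dynamic and obj.distance_from_player < NEAR_CUTOFF:
            local.append(obj)
        else:
            cloud.append(obj)
    return local, cloud

scene = [
    SceneObject("enemy", 12.0, True),       # close + dynamic -> local
    SceneObject("mountain", 900.0, False),  # far + static    -> cloud
    SceneObject("windmill", 300.0, True),   # far + dynamic   -> cloud
]
local, cloud = split_render_queues(scene)
```

The point is that only the latency-tolerant work crosses the network; anything the player can touch this frame stays on local silicon.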
Here's a good Intel article on cloud ray tracing that they showed off over 10 years ago:
https://software.intel.com/sites/default/files/m/d/4/1/d/8/Cloud-based_Ray_Tracing_0211.pdf
https://software.intel.com/en-us/articles/tracing-rays-through-the-cloud/