Ugh, this obsession with ray tracing is killing me. There are far better real-time techniques than pure ray tracing.
I'm interested in using it for sound.
Pure ray tracing as a whole puts too much load on the client rendering the scene, in my opinion. I'm much more excited for path tracing, even for sound. It would make more sense to have online games fully path traced on servers and then have the client denoise and render the result on its end. Ray tracing was always a stopgap technology that was never truly feasible for home users at the rate cloud computing is expanding.
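If it helps, here's roughly what that client-side step would look like; a minimal Python sketch, assuming the server streams a noisy path-traced frame as an array, with a naive box filter standing in for a real denoiser (production pipelines use edge-aware, temporal denoisers like SVGF or learned ones):

```python
import numpy as np

def denoise_frame(noisy: np.ndarray, radius: int = 2) -> np.ndarray:
    """Naive box-filter denoise of a path-traced frame of shape (H, W, 3).

    Stand-in for the client-side denoising pass; real denoisers are
    edge-aware and temporal, this just shows where the work happens.
    """
    h, w, _ = noisy.shape
    padded = np.pad(noisy, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(noisy)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

# Simulate a low-sample-count frame arriving from the server.
rng = np.random.default_rng(0)
clean = np.linspace(0, 1, 64 * 64 * 3).reshape(64, 64, 3)
noisy = clean + rng.normal(0, 0.1, clean.shape)
frame = denoise_frame(noisy)  # what the client would actually display
```

The point being: the expensive light transport happens server-side, and the client's job reduces to a comparatively cheap filtering pass.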
This isn't just a matter of hardware, it's also the software and computational distribution. PSNow is more or less either running fully on the servers or fully locally. MS is addressing these issues by trying to have the local hardware run the controls and "rough" geometry (which should greatly reduce input lag) and the cloud do the heavy lifting. This is where MS may be able to flex their muscles, as they are a software company and their current bread and butter is not just the Azure hardware, but the platform itself (the combination of hardware and software).
Yes, I've actually covered this before. The two techniques being discussed are:
Outatime:
Outatime renders speculative frames of future possible outcomes, delivering them to the client one entire RTT ahead of time, and recovers quickly from mis-speculations when they occur. Clients perceive little latency. To achieve this, Outatime combines: 1) future state prediction; 2) state approximation with image-based rendering and event time-shifting; 3) fast state checkpoint and rollback; and 4) state compression for bandwidth savings.
(In layman's terms, this means they run an instance ahead of the actual instance you are playing and attempt to "guess" what your move will be. This may be more effective for slower games with minimal input, but faster games would suffer from higher rates of mis-speculation. This is also ignoring the cost of running two instances of the exact same game simultaneously in the cloud.) Here is a technical breakdown of the logic flow:
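Sketched below in toy Python; the game state and the repeat-last-input predictor are invented for illustration, this is not Outatime's actual code:

```python
import copy

class SpeculativeServer:
    """Toy model of Outatime-style speculation: render a frame for the
    predicted input one RTT early, roll back if the guess was wrong."""

    def __init__(self, state):
        self.state = state
        self.last_input = None

    def predict_input(self):
        # Naive predictor: assume the player repeats their last input.
        return self.last_input

    def speculate(self):
        checkpoint = copy.deepcopy(self.state)   # fast state checkpoint
        guess = self.predict_input()
        frame = self.simulate_and_render(self.state, guess)
        return checkpoint, guess, frame          # frame ships ~1 RTT early

    def commit_or_rollback(self, checkpoint, guess, actual_input):
        if guess == actual_input:
            return None                          # hit: speculative frame stands
        self.state = checkpoint                  # mis-speculation: restore
        return self.simulate_and_render(self.state, actual_input)

    def simulate_and_render(self, state, player_input):
        state["x"] += {"left": -1, "right": 1, None: 0}.get(player_input, 0)
        return f"frame(x={state['x']})"

server = SpeculativeServer({"x": 0})
checkpoint, guess, frame = server.speculate()    # sent before input arrives
corrected = server.commit_or_rollback(checkpoint, guess, "right")
server.last_input = "right"
```

Every mis-speculation costs a re-simulate and re-render, which is exactly why twitchy games with unpredictable input hurt this approach.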
The other application they have been floating is Kahawai. This attempts to split processing between a client device and a cloud server. Unfortunately, there are several issues with this approach.
1) The client machine would need to handle all aspects of the game which can be run locally. For example, if you are doing collision detection, then whatever you are basing it on has to be rendered locally, so you would not be able to rely on anything rendered in the cloud to be interactive or to have high levels of interactivity. See Crackdown 3 as an example of how "split rendering" gameplay works.
2) If they insisted on split rendering, it would mean the base engine would need to be able to run on the client machine. Remember, the pretty graphics are just one portion of a game's resource demands. Complex physics, AI, and interactive elements typically rely heavily on CPU processing, which mobile devices generally lack. This leads to a situation where you either have to downgrade the game engine to run on weaker devices and not let the "pretty" things dramatically affect gameplay, or you are left with option 3.
3) Developing around the limitations. Ignoring everything else, splitting processing would require some amount of development tweaking to ensure that the aspects of the game which run on the local client are not dependent on things which may render late. Let's take a scenario using these two approaches: you are playing a game with complex interactions between environmental destruction, AI, and physics, and you elect to design the general gameplay around having very powerful hardware, offloading some of the more taxing CPU and GPU tasks to the cloud.
Should you experience latency, a degraded signal, or a temporary loss of connection, elements of your game cease to work as expected, respond slowly, or stop working entirely, effectively creating lag or game-breaking bugs.
As a developer, you have a couple of ways to address this. If you go full cloud, then the game itself resides on the server, and under the same conditions you will see lag, pixelation, or temporary loss of the stream. However, the game and gameplay remain intact, because the game elements still have the resources needed to run; your poor internet connection simply means you are unable to interact with or view the game effectively.
The third option is to remove those elements altogether or make them completely non-interactive. This combines elements of both schools of thought: you create your game with the idea that the hardware available to you is subpar, but you also have additional resources to make the game "prettier" when available. The issue with this approach is that because the resources are no longer guaranteed in a timely fashion, you cannot base gameplay around them, which effectively causes games to be held back by the lowest common denominator.
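To make point 3 concrete, here's a toy sketch of what "don't depend on things that may render late" means inside a frame loop; the function names and the 16 ms budget are invented for illustration:

```python
import queue
import time

CLOUD_BUDGET_MS = 16  # hypothetical per-frame wait budget before falling back

def simulate_physics(world):
    # Local and authoritative: must run every frame regardless of the network.
    world["tick"] = world.get("tick", 0) + 1

def render(world, extras=None):
    print(f"tick={world['tick']} cloud extras: {'yes' if extras else 'no'}")

def frame_update(local_world, cloud_results):
    """One frame of a hypothetical split-rendered game loop."""
    deadline = time.monotonic() + CLOUD_BUDGET_MS / 1000.0
    cloud_geometry = None
    try:
        # Wait only until the frame deadline; never stall gameplay on the cloud.
        cloud_geometry = cloud_results.get(timeout=max(0.0, deadline - time.monotonic()))
    except queue.Empty:
        pass  # result arrived late or not at all: draw the frame without it
    simulate_physics(local_world)                # gameplay depends only on local state
    render(local_world, extras=cloud_geometry)   # cloud output stays cosmetic

world, results = {}, queue.Queue()
frame_update(world, results)   # cloud silent: game still runs, just less pretty
results.put("baked global illumination")
frame_update(world, results)   # cloud on time: extras get composited in
```

Notice the design consequence: anything behind that timeout can only ever be cosmetic, which is the "lowest common denominator" problem in code form.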
My personal preference at this time is to have the best game and gameplay possible with the most guaranteed resources, which is why I lean towards fully rendered cloud game engines rather than splitting processes. I do think there are things which would do better in split scenarios without affecting gameplay, such as clothing and hair physics. Personally, I think we should consider globally illuminated cloud ray tracing with rasterization for interactive elements in a player's immediate area. But anyway, there is no secret panacea for cloud gaming. It's incredibly complex.
My feeling on this is that these techniques would be a bridge, so that over time, as net latency and other factors improve, we can move to a more fully cloud-based option. To me it's kind of like playing multiplayer games back in the pre-broadband era: player prediction algorithms and such were able to make the experience passable, at the expense of the player warping to a different spot when the prediction routines were wrong. These techniques also still allow better client hardware to be leveraged; i.e., the person using their TV or the cheapest available client probably isn't going to care about having the best possible experience, yet the person who wants more can purchase a more capable client to improve theirs. If you do everything on the cloud, you kind of wreck that option.
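For anyone curious, that old trick is usually called dead reckoning; a minimal sketch, with an arbitrary snap threshold standing in for real smoothing logic:

```python
def extrapolate(last_pos, last_vel, seconds_since_update):
    """Dead reckoning: guess where the remote player is now
    based on their last known position and velocity."""
    return last_pos + last_vel * seconds_since_update

def reconcile(predicted_pos, authoritative_pos, snap_threshold=5.0):
    """When the real update arrives, correct the guess. Large errors
    snap (the visible 'warp'); small ones could be smoothed instead."""
    error = abs(authoritative_pos - predicted_pos)
    return authoritative_pos if error > snap_threshold else predicted_pos

# Last update said pos=10, vel=+3/s; half a second of silence since then.
shown = extrapolate(10.0, 3.0, 0.5)   # client draws the player at 11.5
shown = reconcile(shown, 18.0)        # real position was 18 -> visible warp
```

The warp you remember from dial-up days is exactly that snap branch firing.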
I can't disagree with that, but I feel these techniques are already outdated. 5G is pushing for end-to-end latency of no more than 2-5 ms. At that speed, these techniques wouldn't be necessary, and split rendering wouldn't be necessary at all (at 60 FPS a frame takes about 16.7 ms, so a 2-5 ms round trip disappears inside a single frame's budget). While the latency requirement is specific to 5G, it will push more of the infrastructure to focus on lower latency, or at least priority packet routing for specific applications. The timing is perfect for launching cloud gaming because of what is happening at the infrastructure level.
@sakaixx: There is no loophole, as it was posted on Gamespot; therefore only Gamespot's score counts.
Ehh, I feel like if you wanted to make something binding, you would have tried to close any perceived loopholes. Not that this was a serious thread to begin with.
Bunch of garbage, and hence after this reply I'll add you to my ignore list.
Yeah, I have no idea how cloud infrastructure works, despite working as a solution architect for a SaaS app hosted on the cloud. I have no idea how to spin up more instances using Docker and Kubernetes, totally true. /sarcasm
All that garbage you posted has no bearing on the fact I listed: Sony has scalability problems, hence they add users to the queue. Dismissing this fact by dancing around it, bringing up free, paid, or other categories of users, doesn't excuse Sony's inability.
Posting a bunch of strategies the industry uses to combat scalability/load-balancing issues doesn't change the fact that Sony doesn't have the capacity to serve a large number of users. They can't spin up more VMs or containers because they don't have them, hence users are added to the queue. Remain butthurt about it and keep trying to deflect this fact by bringing up irrelevant things to draw attention away from it.
Lastly, I also went through your video, and again, it wasn't a technical video but a marketing one addressing end users, vaguely saying they are making blades with Xbox hardware (which is actually nothing more than PC components) to make end users think they will be playing games as if on an actual Xbox. My marketing department also uses similar tactics to explain to our customers what their new user experience with our enterprise cloud product will be like; they don't put technical/architectural jargon in their pitch to business users. It's stupid to think that MS will put literal X1 hardware in, just for it to become obsolete in two years with the arrival of next-gen systems, leaving their users unable to play new games. Keep trying to dance around this fact.
So let's have a final fact check:
FACT 1: Sony has scalability issues, hence they add users to a queue until an instance becomes available on the server to serve them.
FACT 2: Sony has a shitty 720p/30 FPS streaming service, which is below the modern standard of acceptable resolution.
FACT 3: There's no proof that Sony is making money (revenue =/= profits)
FACT 4: Sony wasn't the first; OnLive was, and it went bankrupt, which in turn allowed Sony to purchase it, because Sony couldn't have bought a successful company.
FACT 5: OnLive and Gaikai were both streaming using PC technology and PC games.
At this point I'm begging you, begging you, for your own sake and self-interest, to ignore my posts and avoid further embarrassment.
You are trying to claim there is a massive scalability problem with PSNow when we have hundreds of counterexamples. Just like any cloud service, even MS has constant queuing and scaling issues for its Azure network, which is down every few days (https://azure.microsoft.com/en-us/status/history/), but I in no way say the MS Azure network is garbage. Nor do I say Netflix is awful because it's constantly down as well (https://outage.report/netflix#2019-01-01). For me to say something like that would be grossly uninformed... so I wonder why someone who claims to "know" about cloud services would misunderstand queuing, load, etc. Hmm...
As for the technical video, there isn't a single person watching it who didn't read that as putting in Xbox One APUs, which are custom built (quit lying about it being PC hardware). The Microsoft reps and every single news outlet were clear on what's going in the box. Cut the lying... You are literally the only one who thinks they aren't putting Xbox hardware in blade servers, when every single news outlet, and Microsoft themselves, is telling you exactly that.
You're delusional, you're embarrassing yourself.
"So let's have a final fact check:
"FACT 1: Sony have scalability issues hence they add users to the queue until an instance becomes available on server to serve the user."
ALL cloud services experience queued instances: Azure, Netflix, Google. It's a cloud thing under load, when you have more users than anticipated. The queues, on the rare occasions they happen, last no more than a minute or two as new VMs spin up and become active. Again, someone unfamiliar with cloud services wouldn't understand this concept. It's not a lack of resources; it's literally them turning on VMs, just like the time it takes for your Xbox to boot up from being completely turned off. Your failure to understand that basic concept is the issue. It's not that they do not have enough servers; it's that they do not keep server instances running when no one is using them, because that's a waste of money. Do you keep your TV, lights, and car running 24/7 just in case you want to use them?

When all these cloud services first started, it was similar, because they did not have a good grasp of user load, which levels out and becomes predictable as a service matures. It's also why claims that no one else will have queue issues don't make any sense: without years of usage data, they wouldn't know their peak times or how many VMs to spin up and when. Again, you clearly, clearly do not understand cloud infrastructure, and you are embarrassing yourself repeatedly. It's literally pitiful.
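If the concept still isn't clear, here's a toy simulation of that behavior; every number is made up, but it shows why a queue forms only while fresh VMs boot:

```python
import heapq

BOOT_DELAY = 60     # assumed seconds for a cold VM to boot; number is invented
vms_ready = 2       # warm instances kept running off-peak
pending_boots = []  # min-heap of (ready_at, count) for VMs still booting

def on_user_arrives(now):
    global vms_ready
    # Fold in any VMs that finished booting since the last arrival.
    while pending_boots and pending_boots[0][0] <= now:
        vms_ready += heapq.heappop(pending_boots)[1]
    if vms_ready > 0:
        vms_ready -= 1
        return "session started"
    # No warm capacity: queue the user and start booting another VM.
    heapq.heappush(pending_boots, (now + BOOT_DELAY, 1))
    return f"queued, new VM ready in ~{BOOT_DELAY}s"

for t in [0, 1, 2, 3, 70]:     # a demand spike, then a later arrival
    print(f"t={t}: {on_user_arrives(t)}")
```

The first two users start instantly on warm capacity, the spike gets queued for one boot cycle, and by t=70 the pool has grown and sessions start instantly again. That's queuing from elastic capacity, not from a hardware shortage.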
"FACT 2: Sony has a shitty 720p/30 FPS streaming service which is below standard of modern acceptable resolution."
Sony has been letting users stream games via the internet since 2006, with the PS3 and PSP... The only games that require streaming are PS3 games, which don't even run at 1080p, so what would be the point? PS4 games can be downloaded and played on a PS4, or played via Remote Play at full 1080p. Ignoring the business decisions, video services like Netflix didn't start with HD or 4K streams either. We know Sony has the technical capability to do it, because they allow 1080p streams via Remote Play, which uses much of the same streaming technology. Their reasons for choosing not to are their own, but I suspect it's because there is no need for anything higher for the bulk of their user base on PS4s, when they can just download the game.
"FACT 3: There's no proof that Sony is making money (revenue =/= profits)"
Never said a word about profits. Don't care; no horse in this race. The only reason I bring up revenue is that you keep claiming the service isn't working, isn't good, or that no one is using it, when it's clear people are paying for and subscribing to the service. You have to move goalposts just to score points in a game I'm not even playing with you. It must be difficult losing to yourself, and the pivot to "profits" smacks of utter desperation and panic.
"FACT 4: SOny wasn't the first, OnLive was which got bankrupted that in turns allowing Sony to purchase it because Sony couldn't have bought a successful company."
Wha??? I never said Sony was first, and I don't care. I was an OnLive subscriber, and I loved that Sony purchased them, because they were so forward-thinking. I'm not cheerleading for Sony, I'm cheerleading for cloud gaming. I don't care who does it. The fact is, Sony is the only console maker who has supported playing your games over the internet since 2006. That's a fact... I don't even know what the point of "because Sony couldn't have bought a successful company" is. It doesn't even make sense, and it sounds like an argument a 13-year-old would make.
"FACT 5: OnLive and Gaikai were both streaming using PC technology and PC games."
And.....? I'm not sure what you are trying to prove. I've mentioned many times that I have used and love cloud gaming services. I never even said Google or MS will suck, because I'm sure they will catch up, but right now they are all behind the eight ball. OnLive and Gaikai were the largest companies out there; Sony snatched them up to kickstart their cloud gaming efforts and move them forward, because they knew you can't just waltz into that industry. Your arguments are so desperate and disjointed that they aren't making sense.
But seriously.
And please, please, please, please, just ignore me, block me, whatever. It's too painful to continue this lunacy. You're embarrassing yourself repeatedly and making desperate rebuttals to arguments no one ever made. Help yourself by stopping this madness. The only reason I'm even responding is to make sure your misinformed rantings aren't taken as actual fact, and to help people get real information instead of random rantings.
@Nuck81: That may be true, but that's what the average used car costs. If you need a minivan, you are paying 30K minimum new, and even decent used minivans are expensive. I'm not really trying to get into the finer points of car buying. What I am saying is that, relative to needs and income, cars cost a lot. Not all car buyers can take advantage of getting the cheapest, smallest car when their priorities are safety, reliability, and room for their families.