For example, let's consider a typical user who tried out OnLive at the Game Developer Conference and played for five minutes (say, navigating the user interface, playing a few games, snapping and watching Brag Clips™, and spectating other users playing). Given that range of activities, the user easily used more than a dozen servers at different times. Just to identify a few: some OnLive servers ran (i.e. "hosted") particular games, other OnLive servers hosted the user interface, and others handled the distribution of spectating video streams and Brag Clips.
As the user transitioned from one experience to another (e.g. clicking in the user interface to start a game), OnLive would "hand off" the user from one server to another, transferring the "user state" (i.e. all the data unique to that user, including the live characteristics of the Internet connection) from the user interface server to the game server, while switching the live compressed HDTV video/audio stream over at the same time. All of this occurred on a video frame boundary, so from the user's point of view the video seemed to continue straight on from the user interface video into the game video, as if it were running on the same server. In actuality, it was seamlessly handed off from one server to the next.
http://blog.onlive.com/2009/05/12/hopping-through-cloud-onlive/
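To make the handoff idea concrete, here's a minimal Python sketch of what that kind of frame-boundary transfer could look like. To be clear, every class and function name here (UserState, AppServer, hand_off, etc.) is my own invention for illustration, not anything from OnLive's actual code: one object carries the per-user state, two stand-in servers produce frames, and a helper moves the state between frames so the client's stream simply switches producers with no gap.

```python
# Hypothetical sketch only: these names do NOT come from OnLive's codebase;
# they just illustrate the "hand off on a video frame boundary" idea
# described in the quoted blog post.

from dataclasses import dataclass, field


@dataclass
class UserState:
    """Everything unique to one user, including live connection characteristics."""
    user_id: str
    bandwidth_kbps: int
    latency_ms: float
    session_data: dict = field(default_factory=dict)


class AppServer:
    """Stands in for a UI server or a game server that produces compressed video frames."""

    def __init__(self, name: str):
        self.name = name
        self.users = {}  # user_id -> UserState

    def attach(self, state: UserState) -> None:
        self.users[state.user_id] = state

    def detach(self, user_id: str) -> UserState:
        return self.users.pop(user_id)

    def next_frame(self, user_id: str) -> bytes:
        # In a real system this would be a compressed HDTV video/audio frame.
        return f"{self.name}:frame-for-{user_id}".encode()


def hand_off(user_id: str, source: AppServer, target: AppServer) -> None:
    """Move a user's state between servers in the gap between two frames,
    so the outgoing video stream switches producers on a frame boundary."""
    state = source.detach(user_id)   # old server stops producing frames for this user
    target.attach(state)             # new server now owns the user state
    # The very next frame the client receives comes from `target`.


if __name__ == "__main__":
    ui = AppServer("ui-server")
    game = AppServer("game-server")

    alice = UserState(user_id="alice", bandwidth_kbps=5000, latency_ms=22.5)
    ui.attach(alice)

    print(ui.next_frame("alice").decode())    # frame produced by the UI server
    hand_off("alice", ui, game)               # user clicks "play": seamless handoff
    print(game.next_frame("alice").decode())  # next frame produced by the game server
```

Obviously the real service also had to keep the compressed video bitrate and timing continuous across the switch, which is way beyond this toy example, but the basic "move the state, then let the next server emit the next frame" flow is the point.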
You think I'm lying? :lol:
Stop trying to lecture me on how the internet works when you don't even know how OnLive works.