michaelmikado's forum posts


#1 michaelmikado
Member since 2019 • 406 Posts

@ronvalencia said:

@michaelmikado said:

@ronvalencia:

No, the video is wrong; what he is suggesting is literally impossible. The CPU and GPU cannot be on two separate packages while the CPU utilizes vRAM from the GPU. This configuration is only feasible when the CPU and GPU are on a single package and bus, with access to the same memory controller.

Edit: to be fair, later in the article they claim that the HBM2 is shared, which contradicts what they claimed earlier, yet they still state that the parts are separate packages. It sounds like there's a mix of marketing and technical information, so I will wait for more info. But whether the HBM2 is just vRAM or not would be a basic spec to get right, and they contradict themselves on it.

FALSE.

The original Xbox 360 had separate CPU and GPU/NB/MCH packages with a unified GDDR3 memory architecture.

The separate CPU package is connected to the GPU package, which is in turn connected to the unified GDDR3 memory.

A PC's CPU can access the GPU's VRAM via PCI-E links. The 1990s PCI protocol already supported server RAM expansion cards in PCI expansion slots, and PCI-E still runs the PCI protocol.

PC's Windows NT/HAL wasn't designed to register memory pools in the GPU's VRAM as system memory. Linux is flexible where Windows NT is rigid.

The 360 having separate packages was only possible because the CPU and GPU had direct access to each other's caches and weren't required to go through a memory bus. While I can't speak to Linux, unless they've replicated the Xbox 360's cache scheme at the server level (which would arguably be a larger accomplishment), it makes no sense from a performance standpoint. And even if they did, the cost would be prohibitive for that kind of customization when there are many off-the-shelf options. That's ignoring the fact that similar cache figures don't mean they are the same chips; they could have made any number of configuration changes to the cache, as stated.
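To put rough numbers on the performance point, here's a back-of-envelope comparison (a minimal sketch; the PCIe 3.0 and HBM2 peak figures are public numbers, everything else is plain arithmetic):

```python
# Back-of-envelope: CPU reaching into GPU VRAM over PCI-E vs. a unified HBM2 pool.
# Peak figures only; real-world throughput is lower still.

PCIE3_GT_PER_LANE = 8.0        # GT/s per lane (PCIe 3.0)
PCIE3_ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
LANES = 16

# GB/s in one direction for a full x16 link
pcie3_x16_gbs = PCIE3_GT_PER_LANE * PCIE3_ENCODING * LANES / 8

hbm2_gbs = 484.0               # GB/s quoted for the Stadia GPU's HBM2

print(f"PCIe 3.0 x16: ~{pcie3_x16_gbs:.1f} GB/s per direction")  # ~15.8
print(f"HBM2 pool:    {hbm2_gbs:.0f} GB/s")
print(f"Gap: ~{hbm2_gbs / pcie3_x16_gbs:.0f}x")                  # ~31x
```

That ~30x bandwidth gap is before latency even enters the picture, which is why treating a separate GPU's VRAM as system memory is a non-starter for performance.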


#2 michaelmikado
Member since 2019 • 406 Posts

6 months.

PS Now is on Mac and PC. Remote Play is on Android.

Predicting Sony will release a new, higher tier of PS Now that will include PS5 games on all platforms, including PS4, possibly before the PS5 even releases.

At the very least, PS4 owners will be able to demo PS5 games on their PS4s.


#3  Edited By michaelmikado
Member since 2019 • 406 Posts

@ronvalencia:

No, the video is wrong; what he is suggesting is literally impossible. The CPU and GPU cannot be on two separate packages while the CPU utilizes vRAM from the GPU. This configuration is only feasible when the CPU and GPU are on a single package and bus, with access to the same memory controller.

Edit: to be fair, later in the article they claim that the HBM2 is shared, which contradicts what they claimed earlier, yet they still state that the parts are separate packages. It sounds like there's a mix of marketing and technical information, so I will wait for more info. But whether the HBM2 is just vRAM or not would be a basic spec to get right, and they contradict themselves on it.


#4 michaelmikado
Member since 2019 • 406 Posts

@ronvalencia:

https://www.guru3d.com/news-story/amd-radeon-gpus-tapped-for-google-stadia-gamestreaming-platform.html

From AMD PR

Custom AMD high-performance Radeon datacenter GPUs for Google Stadia include:

Yes, the exact GPU I've been talking about for months now.

https://www.amd.com/en/products/professional-graphics/radeon-pro-v340

Radeon Pro V340

Gee, I wonder how I knew these GPUs were getting rolled out in datacenters for game streaming months in advance.....?


#5  Edited By michaelmikado
Member since 2019 • 406 Posts

@ronvalencia said:
@michaelmikado said:
@ronvalencia said:
@michaelmikado said:

Nah, this is known. The full hardware was announced in October by AMD.

https://community.amd.com/community/radeon-pro-graphics/blog/2018/11/13/amd-server-cpus-gpus-the-ultimate-virtualization-solution

Anyone who is doing game streaming will be using these servers except for MS.

Footnote from the link.

Estimates based on SPECfp®_rate_base2017 using the GCC-02 v7.2 compiler. AMD-based system scored 201 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1 ku pricing), 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 164 in tests conducted by AMD, configured with 2 x 8160 CPU’s (2 x $4702 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting. NAP-77

CPU Specs: EPYC 7601

  • CPU Cores: 32
  • Threads: 64
  • Base Clock: 2.2GHz
  • Max Boost Clock: 3.2GHz
  • All Core Boost Speed: 2.7GHz
  • Socket Count: 1P/2P
  • PCI Express Lanes: 128
  • Default TDP: 180W

GPU Specs: V340

  • GPU Architecture: Vega
  • Lithography: 14nm FinFET
  • Stream Processors: 7168
  • Compute Units: 112 (2 x Vega 56)
  • Memory Size: 32 GB
  • Memory Type: HBM2

https://www.amd.com/en/products/cpu/amd-epyc-7601

Epyc 7601 with 32 CPU cores has 16MB L2 cache + 64 MB L3 cache.

Google's version

  • Custom x86 processor clocked at 2.7GHz w/ AVX2 SIMD and 9.5MB of L2+L3 cache <-------- NOT Epyc 7601

Google's Stadia will be powered by the following specs:

  • Custom x86 processor clocked at 2.7GHz w/ AVX2 SIMD and 9.5MB of L2+L3 cache
  • Custom AMD GPU w/ HBM2 memory, 56 compute units, and 10.7TFLOPs
  • 16GB of RAM (shared between CPU and GPU), up to 484GB/s of bandwidth <------- NOT a standard PC architecture
  • SSD cloud storage

Reference: https://www.pcgamer.com/google-stadias-specs-and-latency-revealed/

PCGamer's half-assed reporting, which they copied from Digital Foundry, is WRONG:

  • 16GB of RAM (shared between CPU and GPU), up to 484GB/s of bandwidth <------- NOT a standard PC architecture

Here's what Digital Foundry ACTUALLY said.

Right now, it's not clear if the 16GB of memory is for the whole system, or for GPU VRAM only

Google is being coy about their hardware. The vCPUs may be some type of special custom virtualization, but the hardware itself is anything but that. What it looks like is that they are running a virtualization layer for instances and splitting the available cache in some weird way to come up with these specs. I can almost guarantee the specs are essentially meaningless and "semi"-custom at best. Semi-custom generally means readily available hardware configurations with some minor tweaks or pairings. My guess is they are purposely being coy because they will be upgrading to Rome EPYC CPUs as soon as they are available.

Yeah, that's marketing BS trying to give an approximate per-instance virtual allocation. It doesn't mean that is the actual underlying physical hardware. Digital Foundry already correctly guessed they are just running Vega 56s. I've already stated the cards are V340s, which are just a pair of Vega 56s duct-taped together for use in servers. These specs are for the cameras and for those who wouldn't know anything beyond counting TFLOPs. They aren't "technically" wrong, but they are marketing nonsense.

This is just the Google version of 3x the power of an XB1 in the cloud.
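For what it's worth, the Vega 56 reading is easy to sanity-check from GCN's layout: 64 stream processors per CU, each doing 2 FLOPs per clock via FMA. A minimal sketch (the 1471MHz retail boost clock is the only outside figure; nothing here is a confirmed Stadia number):

```python
# Sanity-check the "Vega 56 at 10.7 TFLOPS" decode.
CUS = 56
SP_PER_CU = 64           # stream processors per GCN compute unit
FLOPS_PER_CLOCK = 2      # one FMA counts as 2 FLOPs

sps = CUS * SP_PER_CU    # 3584 stream processors

# Clock implied by the quoted 10.7 TFLOPS figure:
implied_ghz = 10.7e12 / (sps * FLOPS_PER_CLOCK) / 1e9
print(f"Implied clock: {implied_ghz:.2f} GHz")        # ~1.49 GHz

# Retail Vega 56 at its 1471 MHz boost clock:
retail_tflops = sps * FLOPS_PER_CLOCK * 1.471e9 / 1e12
print(f"Retail Vega 56: {retail_tflops:.2f} TFLOPS")  # ~10.5, matching the 10.56 above
```

So 10.7 TFLOPS is just a 56-CU Vega part clocked a touch above retail boost, i.e. exactly the V340-class silicon discussed above.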


#6 michaelmikado
Member since 2019 • 406 Posts
@ronvalencia said:
@michaelmikado said:
@ronvalencia said:

@qx0d:

https://wccftech.com/stadia-is-googles-cloud-based-game-platform-powered-by-amd-linux-and-vulkan-due-in-2019/

Google Stadia is powered by AMD technology and custom built for this use case. It surpasses the likes of PlayStation 4 Pro and Xbox One X put together with its 10.7 teraflops of computing power, 56 compute units and HBM2 memory. The CPU is a custom x86 processor clocked at 2.7 GHz which also supports hyperthreading and AVX2. Memory totals 16GB, with up to 484 GB/s of transfer speed, and 9.5 MB of L2 + L3 cache.

Specs decode

GPU: RX Vega 56 with Vega 64's HBM v2 memory bandwidth... Unknown if this Vega 56 is a semi-custom VII with 56 CUs at 10.7 TFLOPS and two HBM v2 stacks. Normal Vega 56 has 10.56 TFLOPS.

CPU: custom ZEN ... AVX2 full hardware from ZEN 2 or a half-baked version from ZEN v1.x??? Clock speed set at 2.7 GHz.

Nah, this is known. The full hardware was announced in October by AMD.

https://community.amd.com/community/radeon-pro-graphics/blog/2018/11/13/amd-server-cpus-gpus-the-ultimate-virtualization-solution

Anyone who is doing game streaming will be using these servers except for MS.

Footnote from the link.

Estimates based on SPECfp®_rate_base2017 using the GCC-02 v7.2 compiler. AMD-based system scored 201 in tests conducted in AMD labs using an “Ethanol” reference platform configured with 2 x AMD EPYC 7601 SOC’s ($4200 each at AMD 1 ku pricing), 512GB memory (16 x 32GB 2R DDR4 2666MHz), Ubuntu 17.04, BIOS 1002E. Intel-based Supermicro SYS-1029U-TRTP server scored 164 in tests conducted by AMD, configured with 2 x 8160 CPU’s (2 x $4702 each per ark.intel.com), 768GB memory (24 x 32GB 2R DDR4 2666MHz), SLES 12 SP3 4.4.92-6.18-default kernel, BIOS set to Extreme performance setting. NAP-77

CPU Specs: EPYC 7601

  • CPU Cores: 32
  • Threads: 64
  • Base Clock: 2.2GHz
  • Max Boost Clock: 3.2GHz
  • All Core Boost Speed: 2.7GHz
  • Socket Count: 1P/2P
  • PCI Express Lanes: 128
  • Default TDP: 180W

GPU Specs: V340

  • GPU Architecture: Vega
  • Lithography: 14nm FinFET
  • Stream Processors: 7168
  • Compute Units: 112 (2 x Vega 56)
  • Memory Size: 32 GB
  • Memory Type: HBM2

https://www.amd.com/en/products/cpu/amd-epyc-7601

Epyc 7601 with 32 CPU cores has 16MB L2 cache + 64 MB L3 cache.

Google's version

  • Custom x86 processor clocked at 2.7GHz w/ AVX2 SIMD and 9.5MB of L2+L3 cache <-------- NOT Epyc 7601

Google's Stadia will be powered by the following specs:

  • Custom x86 processor clocked at 2.7GHz w/ AVX2 SIMD and 9.5MB of L2+L3 cache
  • Custom AMD GPU w/ HBM2 memory, 56 compute units, and 10.7TFLOPs
  • 16GB of RAM (shared between CPU and GPU), up to 484GB/s of bandwidth <------- NOT a standard PC architecture
  • SSD cloud storage

Reference: https://www.pcgamer.com/google-stadias-specs-and-latency-revealed/

PCGamer's half-assed reporting, which they copied from Digital Foundry, is WRONG:

  • 16GB of RAM (shared between CPU and GPU), up to 484GB/s of bandwidth <------- NOT a standard PC architecture

Here's what Digital Foundry ACTUALLY said.

Right now, it's not clear if the 16GB of memory is for the whole system, or for GPU VRAM only

Google is being coy about their hardware. The vCPUs may be some type of special custom virtualization, but the hardware itself is anything but that. What it looks like is that they are running a virtualization layer for instances and splitting the available cache in some weird way to come up with these specs. I can almost guarantee the specs are essentially meaningless and "semi"-custom at best. Semi-custom generally means readily available hardware configurations with some minor tweaks or pairings. My guess is they are purposely being coy because they will be upgrading to Rome EPYC CPUs as soon as they are available.


#7 michaelmikado
Member since 2019 • 406 Posts

@ronvalencia:

Right, that's Google's fuzzy math at play. There are no modern servers with less than 10MB of combined L2+L3 cache; it's likely some weird division of the total cache that they're allocating per instance. Maybe they got a 7601 with slightly less cache from AMD, who knows. The point is that's basically the chip they are using.
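A quick sketch of why no clean split of the 7601's cache lands on 9.5MB (the cache and core figures are AMD's published 7601 specs; the instance counts are hypothetical):

```python
# EPYC 7601 published cache: 16MB L2 + 64MB L3 across 32 cores.
L2_MB, L3_MB, CORES = 16, 64, 32
total_mb = L2_MB + L3_MB                    # 80MB combined

print(f"Per core: {total_mb / CORES} MB")   # 2.5MB of L2+L3 per core

# Hypothetical even splits of the pool vs. the quoted 9.5MB per instance:
for instances in (4, 8, 16):
    print(f"{instances:>2} instances -> {total_mb / instances:.2f} MB each")
# 8 instances (or a 4-core slice) gives 10MB, not 9.5MB -- no clean split
# lands on the quoted number, which is what makes it look like an arbitrary
# virtualized allocation rather than a physical part's spec.
```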


#8 michaelmikado
Member since 2019 • 406 Posts

@Litchie said:
@superfluousreal said:
@Litchie said:

To anyone who has seen it, did they bring up latency in the presentation?

Latency is the ultimate decider; it will never be conquered.

Which is why I'm asking. If Google deliberately didn't mention the thing that matters most to gamers and game streaming, Stadia is obviously shit.

No, price would be more important.


#9 michaelmikado
Member since 2019 • 406 Posts

@Grey_Eyed_Elf said:
@michaelmikado said:
@Grey_Eyed_Elf said:
@michaelmikado said:

Peak speed is the least reliable metric for measuring preparedness for game streaming. It will be about latency and consistency. Internet speeds are almost a non-factor at today's standard speeds.

While I agree for the most part, I have to say that it's a factor if you want the 4K/60 experience... The 25-50Mbps that is the average in the UK is not close to being enough, and during peak times A LOT of people will be lucky to get 2/3 of their advertised speed.

My brother lives 30 minutes away from me in an apartment complex in Wimbledon, and his area can't get more than a 63Mbps connection from TalkTalk. He has their 38Mbps connection and can't even get 22Mbps down on a speed test at peak times when he's home from work... It works alright for 4K Netflix, but I don't think it will for something like this.

Yeah, but this is assuming at-home speeds are non-pooled. As long as a user is getting 25Mbps even at peak, the speed should be fine. This service will most likely launch in NA first anyway and then hit Europe and Asia. By that time I would expect speeds to be well above a 25Mbps in-home average. The service isn't even launching until later this year, never mind when it officially launches across the pond.

The 25Mbps number is what is required for 1080/60, so I'm not sure why you are saying it should be fine on a service that is targeting 4K/60.

Because I don't think the average user of this service, playing in a web browser on a Chromebook, is looking for 4K/60. Users looking for 4K/60 will get a physical box to play at the best possible fidelity. The average Netflix user doesn't care that Infinity War is only in 1080p rather than 4K. What I'm saying is that for the average targeted user this is likely more than fine, and latency will matter more to their experience than resolution.
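Rough numbers on the bandwidth point (a sketch; the 25Mbps 1080p/60 baseline is the commonly cited figure, and the 4K compression-efficiency factor is purely an assumption):

```python
# Rough streaming-bitrate scaling by pixel count.
BASE_MBPS = 25.0                 # commonly cited figure for 1080p/60
BASE_PIXELS = 1920 * 1080

PIXELS_4K = 3840 * 2160
EFFICIENCY_4K = 0.6              # assumed: encoders do better per pixel at 4K

naive_4k = BASE_MBPS * PIXELS_4K / BASE_PIXELS   # linear scaling -> 100 Mbps
adjusted_4k = naive_4k * EFFICIENCY_4K           # -> ~60 Mbps

print(f"4K/60 naive:    {naive_4k:.0f} Mbps")
print(f"4K/60 adjusted: {adjusted_4k:.0f} Mbps")
# Both estimates sit well above the ~22Mbps peak-hour throughput described
# above, while the 25Mbps needed for 1080p/60 is borderline but workable.
```

Either way you cut it, 4K/60 streaming needs two to four times what a typical UK connection delivers at peak, which is why 1080p/60 is the realistic target for the average user.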


#10 michaelmikado
Member since 2019 • 406 Posts

@Grey_Eyed_Elf said:
@michaelmikado said:

Peak speed is the least reliable metric for measuring preparedness for game streaming. It will be about latency and consistency. Internet speeds are almost a non-factor at today's standard speeds.

While I agree for the most part, I have to say that it's a factor if you want the 4K/60 experience... The 25-50Mbps that is the average in the UK is not close to being enough, and during peak times A LOT of people will be lucky to get 2/3 of their advertised speed.

My brother lives 30 minutes away from me in an apartment complex in Wimbledon, and his area can't get more than a 63Mbps connection from TalkTalk. He has their 38Mbps connection and can't even get 22Mbps down on a speed test at peak times when he's home from work... It works alright for 4K Netflix, but I don't think it will for something like this.

Yeah, but this is assuming at-home speeds are non-pooled. As long as a user is getting 25Mbps even at peak, the speed should be fine. This service will most likely launch in NA first anyway and then hit Europe and Asia. By that time I would expect speeds to be well above a 25Mbps in-home average. The service isn't even launching until later this year, never mind when it officially launches across the pond.