[QUOTE="ronvalencia"][QUOTE="tormentos"]
""AMD Details hUMA: HSA in Action!""
From the same link.
Keep the denial going, this was from AMD's presentation.. :lol:
Clear as day, only you in your magical world would think that a driver for HSA will actually make GDDR5 and DDR3 in 2 different pools work as hUMA over a PCI-e bus..
savagetwinkie
It doesn't say much. LOL. The 3rd party HSA app on Radeon HD 7800 is a practical example in action.
You can avoid the copy process if you don't write to main memory in the first place.
You allocate the required job to the right processor in the first place.
-------------
The presentation is correct if you write data on the main memory and then transfer it to the GPU.
The presentation is not applicable if you directly write to GPU's main memory.
HSA is pointless if you write to the GPU's memory then... the entire point is sharing a single memory pool so you don't have to. Either way you still have to copy memory over the PCI-e to get it to the GPU, thus HSA is only really virtual and not really HSA architecture on a discrete GPU. True HSA only works on an APU because the GPU/CPU share the same memory bus. I'm sure the GPU for the 7800 supports HSA currently, but the cards/motherboards virtualize the functionality for compatibility over the PCI-e.

WTF? HSA enables direct access to the GPU's memory space by mapping its memory into the host CPU's virtual address space. This enables 64-bit pointers to be used across the linear address space and direct access to the GPU.
The "Either way you still have to copy memory over the PCI-e to get it to the GPU" statement is not true for all cases, i.e. copying data between different memory spaces is NOT necessary.
PC's PCI-E bus can act like PS4's CPU I/O bus.
http://fabricengine.com/2012/07/gpu_computation_technology_preview/
This 3rd party app running on beta HSA on Radeon HD 7800.
The AMD HSA technology platform has the goal of providing a heterogeneous computation platform in which both CPU and GPU cores access and manipulate memory identically. HSA will enable complex data structures with pointer indirection to be shared between the CPU and GPU. Not only will no copying of data between different memory spaces be necessary, but the pointers imbedded in a complex data structure will be usable without change on both CPU and GPU cores.
In collaboration with AMD, the Fabric Engine development team has extended the KL compiler and Fabric Engine Core execution environment to support GPU computation on high-end AMD GPUs. The primary means by which this preliminary work was possible was the availability of an LLVM back end for AMD GPU hardware.
....
The animated scene was run on a workstation with an AMD A10-5800K APU with both integrated graphics and a discrete Radeon HD 7800 card; however, only the discrete card was used for GPU computation and OpenGL rendering for these tests.
You are assuming writing to the CPU's memory space is a necessary requirement, i.e. your POV breaks the purpose of direct GPU memory access.
You can stay with the current Direct3D and old driver model for in-direct access to GPU's memory space, while Intel (e.g. Instant Access), AMD (e.g. HSA) and NVIDIA (e.g. DirectGPU) work out a method to have the real DirectX.
DirectX is an oxymoron when it comes to having direct access to the hardware.