Although CPUs and GPUs have coexisted on the same piece of silicon (better known as an APU) for a few years now, the CPU and GPU have not been ‘equal citizens’. Heterogeneous System Architecture (HSA) is a computing architecture that enables the CPU, GPU and other processors to work together in harmony on a single piece of silicon by seamlessly moving each task to the best-suited processing element. HSA creates an environment that allows the GPU to be used as fluidly as the CPU.
Enabling high-bandwidth access to memory will arguably be important in our quest to unlock this compute performance. Breaking down bottlenecks in how the GPU accesses memory matters to the future of programming because it allows applications to efficiently move each task to the best-suited processing element. heterogeneous Uniform Memory Access, or hUMA, is the first step in bringing a heterogeneous compute ecosystem to life. hUMA is a sophisticated shared memory architecture used in APUs (Accelerated Processing Units). In a hUMA architecture, the CPU and GPU (inside the APU) have full access to the entire system memory: all processing cores share a single memory address space. hUMA's main features include:
- Access to the entire system memory space: CPU and GPU processes can dynamically allocate memory from the entire memory space
- Pageable memory: the GPU can take page faults and is no longer restricted to page-locked memory
- Bi-directional coherent memory: any updates made by one processing element will be seen by all other processing elements, whether GPU or CPU
Read the whole article on AMD Blogs.