How unified memory blows the SoCs off the M1 Macs

Howard Oakley:

One of the major new hardware features of Apple Silicon Macs, including those launched on 10 November, is that they use “unified memory”. This article looks briefly at what this means, its consequences, and where the M1 and its successors are taking hardware design.

And:

GPUs are now being used for a lot more than just driving the display, and their computing potential for specific types of numeric and other processing is in demand. So long as CPUs and GPUs continue to use their own local memory, simply moving data between their memories has become an unwanted overhead.
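
To make that copy step concrete, here's a rough Metal sketch (mine, not Oakley's) of the traditional discrete-GPU pattern he's describing: the CPU fills a staging buffer, and the data then has to be blitted into a GPU-private buffer before the GPU can use it efficiently. The buffer names and sizes are just illustrative.

```swift
import Metal

// Traditional discrete-GPU pattern: data is prepared in CPU-visible memory,
// then explicitly copied (blitted) into GPU-local memory before the GPU
// can work on it. Buffer names and sizes here are illustrative.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available")
}

let count = 4_000_000
let byteCount = count * MemoryLayout<Float>.stride

// CPU-visible staging buffer: the CPU fills this directly.
let staging = device.makeBuffer(length: byteCount, options: .storageModeShared)!
let input = staging.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { input[i] = Float(i) }

// GPU-only buffer: on a discrete GPU this lives in VRAM, so the data has to
// cross the bus before the GPU can touch it.
let gpuLocal = device.makeBuffer(length: byteCount, options: .storageModePrivate)!

// The copy itself: the "unwanted overhead" the quote describes.
let commandBuffer = queue.makeCommandBuffer()!
let blit = commandBuffer.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0,
          to: gpuLocal, destinationOffset: 0,
          size: byteCount)
blit.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```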

And:

In this new model, CPU cores and GPUs access the same memory. When data being processed by the CPU needs to be manipulated by the GPU, it stays where it is. That unified memory is as fast to access as dedicated GPU memory, and completely flexible. When you want to connect a high-resolution display, that’s not limited by the memory tied to the GPU, but by total memory available. Imagine the graphics capability of 64 or even 128 GB of unified memory.
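
And here's the unified-memory counterpart, again my own minimal sketch rather than anything from the article, using Metal's shared storage mode: the CPU and GPU work on one and the same buffer, so there is no blit step at all. The kernel name (`double_values`) and element count are made up for the example.

```swift
import Metal

// Unified-memory sketch: CPU and GPU read and write the same shared buffer,
// so no copy step is needed at any point.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0f;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available")
}

let count = 1_000_000
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes straight into the buffer the GPU will use; the data
// "stays where it is".
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }

// Build a small compute pipeline from the kernel source above.
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "double_values")!)

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// The CPU reads the GPU's results from the very same memory, again with no copy.
print(values[0], values[1], values[count - 1])   // 0.0 2.0 1999998.0
```

On an M1, a `.storageModeShared` buffer lives in the same LPDDR pool the CPU uses, which is exactly the "stays where it is" behaviour Oakley describes.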

And:

Apple’s M1 Macs are its first convergence of these features: sophisticated SoCs which tightly integrate CPU cores and GPUs, fast access to unified memory, and tightly-integrated storage on an SSD. Together they offer unrivalled versatility: what Apple sees as relatively low-end systems which can turn their hand and speed to some of the most demanding tasks while remaining cool, consuming little power, and being relatively inexpensive to manufacture in volume.

A great read that helps explain some of the speed increases in the M1 chip, and why 16 GB of M1 RAM is not the same as 16 GB of Intel Mac RAM.