What is Unified Memory and how does it work on Apple Silicon?
With the release of Apple’s M1-powered iPad Pro and 24-inch iMac, there’s renewed interest in the remarkable efficiency of the M1 chip. The M1 marked Apple’s first use of a unified memory architecture (UMA) in its own silicon, an approach that lets Apple squeeze more performance out of less total RAM. So how does unified memory on Apple Silicon actually work? Let’s take a look, starting with a few basics about memory in general and what’s new in the M1 design.
What is RAM and how is the M1 different here?
RAM stands for “Random Access Memory” and is the main component of a computer’s system memory. System memory serves as a temporary workspace for the data your computer is using at any given moment, including files you’re currently viewing as well as data needed by macOS itself. Traditionally, RAM comes as physical sticks that slot into the motherboard. The M1 breaks with that design as well.
Apple designed the M1 as a system on a chip (SoC), with the RAM included as part of this package. While integrating RAM with the SoC is common in smartphones such as the iPhone 12 series, this is a relatively new idea for desktop and laptop computers. Adding RAM to the SoC design enables faster access to memory, improving efficiency.
In addition to physically adding the RAM to the SoC, Apple has changed the fundamental way the system uses memory. This is where unified memory on Apple Silicon comes into play.
What is Unified Memory and how does it work?
Unified memory is about minimizing redundant copies of data between the separate pools of memory used by the CPU, GPU, and other components. Copying is slow and wastes capacity. In a traditional integrated-graphics design, part of your RAM is carved out and reserved for the GPU: if your laptop is advertised with 16GB of RAM and 2GB is allocated to the GPU, only 14GB remains available for everything else, even when the GPU is sitting idle. Apple addresses this with UMA, making memory allocation more fluid and improving performance.
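The reserved-memory math above can be sketched in a few lines of Python. This is a toy model only, not how any real allocator works; the function names are made up for illustration, and the numbers mirror the 16GB example:

```python
# Toy model of memory availability under the two designs.
# Numbers mirror the 16GB laptop example above; real allocators
# are far more complex than simple subtraction.

TOTAL_RAM_GB = 16.0

def available_fixed_partition(gpu_reserved_gb: float) -> float:
    """Traditional design: the GPU's share is walled off up front,
    whether or not the GPU is actually using it."""
    return TOTAL_RAM_GB - gpu_reserved_gb

def available_unified(gpu_in_use_gb: float) -> float:
    """Unified memory: only what the GPU is using right now is
    unavailable; the rest stays in the shared pool."""
    return TOTAL_RAM_GB - gpu_in_use_gb

# With 2GB reserved, the system sees 14GB no matter what the GPU does.
print(available_fixed_partition(2.0))   # 14.0

# With unified memory, a nearly idle GPU leaves almost the whole pool free.
print(available_unified(0.25))          # 15.75
```

The point of the contrast: in the fixed design the 2GB is lost up front, while in the unified design availability tracks actual demand from moment to moment.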
Gaming provides the best example for understanding the benefits of unified memory. When you play a game on a Mac with a discrete graphics card, the CPU first receives all the instructions for the game and then pushes the data the GPU needs over to the graphics card. The graphics card then works on that data using its own processor (the GPU) and its own dedicated memory (VRAM).
If you have a processor with integrated graphics, the GPU still maintains its own chunk of memory, as does the CPU. The two work on the same data independently and then pass results back and forth between their memory pools. Drop the requirement to move data back and forth, and it’s easy to see how keeping everything in one place improves performance. That is exactly what the unified memory approach does: every component accesses the same data in the same pool of memory.
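The difference in data movement can be modeled with a rough Python sketch. This is purely illustrative: `render_discrete` and `render_unified` are hypothetical stand-ins, the XOR loop is a placeholder for real GPU work, and the only thing being measured is how many bytes cross between memory pools:

```python
# Illustrative sketch of data movement in the two designs.
# The "work" (XOR-ing every byte) is a placeholder for rendering;
# what matters is the byte count shuttled between memory pools.

def render_discrete(frame: bytearray) -> tuple[bytearray, int]:
    """Separate CPU and GPU memories: the frame is copied into the
    GPU's memory, processed there, then the result is copied back."""
    copied = 0
    gpu_buffer = bytearray(frame)        # copy: CPU RAM -> GPU VRAM
    copied += len(frame)
    for i in range(len(gpu_buffer)):     # GPU works in its own memory
        gpu_buffer[i] ^= 0xFF
    result = bytearray(gpu_buffer)       # copy: GPU VRAM -> CPU RAM
    copied += len(gpu_buffer)
    return result, copied

def render_unified(frame: bytearray) -> tuple[bytearray, int]:
    """Unified memory: CPU and GPU share one pool, so the GPU works
    on the frame in place and nothing crosses between pools."""
    for i in range(len(frame)):
        frame[i] ^= 0xFF
    return frame, 0

frame = bytearray(b"\x00" * 1024)        # a 1KB "frame"
_, copied = render_discrete(bytearray(frame))
print(copied)    # 2048 bytes moved back and forth
_, copied = render_unified(bytearray(frame))
print(copied)    # 0 bytes moved
```

Both paths produce the identical result; the unified path simply skips the round trip, which is where the performance win comes from.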
Apple achieved something remarkable with the M1 SoC. In addition to integrating RAM physically, the new unified memory architecture makes more efficient use of the memory that’s available. With this design, the new M1 iMacs can handle just about anything, even running Windows 10 on Arm in a virtual machine. Placing all memory in a single pool means any component can ramp up its usage when needed, with resources allocated seamlessly.