[Seminar] Scalable Memory Systems in the Multi-Core Era (6/18 2pm~6pm, 6/20 2pm~6pm)
The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent trends towards increasingly more cores on die, consolidation of diverse workloads on a single chip, and difficulty of DRAM scaling impose new requirements and exacerbate old demands on the memory system. In particular, the need for memory bandwidth and capacity is increasing, applications’ interference in the memory system increasingly limits system performance and makes the system hard to control, memory energy and power are key design concerns, and DRAM technology consumes a significant amount of energy and does not scale down easily to smaller technology nodes. Fortunately, some promising solution directions exist. In this short course, we will first briefly cover the basics of memory systems and examine fundamental tradeoffs. Next, we will describe recent technology, application, and architecture trends and how they change the way we should think of and design memory systems. Finally, we will examine new memory system designs for multi-core architectures that address these trends and requirements. In particular, we will cover recent research on tackling challenges related to scaling the capacity, energy efficiency, bandwidth, latency, and feature size of main memory. We will potentially examine three major solution directions: 1) how to design more efficient and higher-bandwidth DRAM architectures, 2) how to employ emerging memory technologies in a hybrid memory system, and 3) how to enable more predictable and QoS-aware memory systems. The related papers referenced below provide the reference reading material for the course and would be useful to study beforehand. If time permits, or during offline discussions, we will also delve into the design of interconnects for multi-core architectures and the acceleration of bottlenecks. Supplementary references are provided below for interested students.
- Lecture schedule
1. Lecture 1 (June 18)
   - Memory System Fundamentals: DRAM, caches, latency, bandwidth, parallelism
   - Recent Trends and Challenges in Memory Systems
   - More Efficient and Higher-Bandwidth DRAM Architectures
2. Lecture 2 (June 20)
   - Emerging Technologies and Hybrid Memory Systems
   - Predictable and QoS-Aware Memory Systems
Onur Mutlu is an Assistant Professor of ECE (and, by courtesy, CSD) at Carnegie Mellon University. His broader research interests are in computer architecture and systems, especially in the interactions between languages, operating systems, compilers, and microarchitecture. He enjoys teaching and researching important and relevant problems in computer architecture, including problems related to the design of memory systems, multi-core architectures, and scalable and efficient systems. He obtained his PhD and MS in ECE from the University of Texas at Austin (2006) and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. Prior to Carnegie Mellon, he worked at Microsoft Research (2006-2009), Intel Corporation, and Advanced Micro Devices. He is a recipient of the first IEEE Computer Society Young Computer Architect Award, the CMU College of Engineering George Tallman Ladd Research Award, ASPLOS and VTS Best Paper Awards, the US National Science Foundation CAREER Award, the Microsoft Gold Star Award, the University of Texas Graduate Research Excellence Award, and a number of “computer architecture top pick” paper selections by IEEE Micro magazine. For more information, please see http://www.ece.cmu.edu/~omutlu.
■ Contact: Prof. Jaejin Lee (02-880-1863)