A Quantitative Study of Locality in GPU Caches for Memory-Divergent Workloads

Sohan Lal, B Sharat Chandra Varma, Ben Juurlink

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)
162 Downloads (Pure)

Abstract

GPUs are capable of delivering peak performance in TFLOPs; however, peak performance is often difficult to achieve due to several performance bottlenecks. Memory divergence is one such bottleneck: it makes locality harder to exploit, causes cache thrashing and high miss rates, and thereby impedes GPU performance. As data locality is crucial for performance, there have been several efforts to exploit data locality in GPUs. However, there is a lack of quantitative analysis of data locality, which could pave the way for optimizations. In this paper, we quantitatively study data locality and its limits in GPUs at different granularities. We show that, in contrast to previous studies, there is significantly higher inter-warp locality at the L1 data cache for memory-divergent workloads. We further show that about 50% of the cache capacity and other scarce resources such as NoC bandwidth are wasted due to data over-fetch caused by memory divergence. While the low spatial utilization of cache lines justifies the sectored-cache design, which fetches only those sectors of a cache line that are needed during a request, our limit study reveals the lost spatial locality for which additional memory requests are needed to fetch the other sectors of the same cache line. This lost spatial locality presents opportunities for further optimizing the cache design.
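The over-fetch effect described in the abstract can be illustrated with a small model. The sketch below (an illustration only, not the paper's methodology; the line size, warp size, and access size are typical GPU values assumed here) counts how many 128-byte L1 cache lines a single 32-thread warp touches and what fraction of the fetched bytes is actually used, for a coalesced versus a divergent access pattern.

```python
# Illustrative model (not the paper's methodology): estimate how many
# 128-byte L1 cache lines one 32-thread warp touches, and what fraction
# of the fetched bytes the warp actually uses.

LINE_SIZE = 128      # bytes per L1 cache line (typical for NVIDIA GPUs)
WARP_SIZE = 32       # threads per warp
ACCESS_SIZE = 4      # each thread loads one 4-byte word

def line_utilization(addresses):
    """Return (number of cache lines fetched, fraction of fetched bytes used)."""
    lines = {addr // LINE_SIZE for addr in addresses}
    used_bytes = len(addresses) * ACCESS_SIZE
    fetched_bytes = len(lines) * LINE_SIZE
    return len(lines), used_bytes / fetched_bytes

# Coalesced: consecutive threads read consecutive words -> one cache line.
coalesced = [tid * ACCESS_SIZE for tid in range(WARP_SIZE)]
# Divergent: each thread strides a full cache line apart -> 32 cache lines.
divergent = [tid * LINE_SIZE for tid in range(WARP_SIZE)]

print(line_utilization(coalesced))   # (1, 1.0): every fetched byte is used
print(line_utilization(divergent))   # (32, 0.03125): ~97% of fetched bytes wasted
```

In this model the divergent warp wastes nearly 97% of the fetched bytes, which is the kind of spatial under-utilization that motivates the sectored-cache discussion in the abstract.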
Original language: English
Pages (from-to): 189-216
Number of pages: 28
Journal: International Journal of Parallel Programming
Volume: 50
Early online date: 5 Apr 2022
DOIs
Publication status: Published (in print/issue) - Apr 2022

Bibliographical note

Open Access funding enabled and organized by Projekt DEAL.

Publisher Copyright:
© 2022, The Author(s).

Keywords

  • GPU caches
  • Cache performance
  • Cache
  • Hardware Acceleration
  • Computer Architecture
  • Architecture Simulators
  • Parallel Programming
  • Data locality
  • Memory divergence
