Addressing GPU on-chip shared memory bank conflicts using elastic pipeline
Article in a scientific journal, 2013

One of the major problems with GPU on-chip shared memory is bank conflicts. We show that the throughput of the GPU processor core is often constrained neither by the shared memory bandwidth nor by the shared memory latency (as long as it stays constant), but rather by the varied latencies caused by memory bank conflicts. These variations create conflicts at the writeback stage of the in-order pipeline and cause pipeline stalls, degrading system throughput. Based on this observation, we propose a novel Elastic Pipeline design that minimizes the negative impact of on-chip memory bank conflicts on system throughput by decoupling bank conflicts from pipeline stalls. Simulation results show that our proposed Elastic Pipeline, together with the co-designed bank-conflict-aware warp scheduling, reduces pipeline stalls by up to 64.0% (42.3% on average) and improves overall performance by up to 20.7% (13.3% on average) for representative benchmarks, at trivial hardware overhead. © 2012 The Author(s).
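As background (this sketch is not from the paper itself), the latency variation the abstract refers to comes from the conflict degree of a shared-memory access: when d threads of a warp map to the same bank, the access serializes into d transactions. A minimal model, assuming a 32-thread warp and 32 word-interleaved banks (NVIDIA-style; the paper's simulated GPU may differ):

```python
from collections import Counter

NUM_BANKS = 32  # assumed bank count; actual hardware may differ


def conflict_degree(stride, num_threads=32, num_banks=NUM_BANKS):
    """Maximum number of threads hitting the same bank when thread i
    accesses word i*stride (hypothetical word-strided access pattern)."""
    hits = Counter((i * stride) % num_banks for i in range(num_threads))
    return max(hits.values())


# Stride 1 is conflict-free; even strides serialize: stride 2 gives a
# 2-way conflict, stride 32 degenerates to a 32-way conflict.
for s in (1, 2, 3, 32):
    print(s, conflict_degree(s))
```

A degree-d conflict stretches the access to roughly d cycles, which is exactly the non-constant latency that, per the abstract, backs up into writeback-stage conflicts and pipeline stalls in an in-order pipeline.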

Bank conflicts

On-chip shared memory

GPU

Elastic pipeline

Authors

C. Gou

Technische Universiteit Delft

Georgi Gaydadjiev

Technische Universiteit Delft

International Journal of Parallel Programming

0885-7458 (ISSN)

Vol. 41, Issue 3, pp. 400–429

Subject categories

Computer Engineering

Computer Systems

Areas of Advance

Information and Communication Technology

DOI

10.1007/s10766-012-0201-1

More information

Last updated

2019-06-28