A Study of the Behavior of Synchronization Methods in Commonly Used Languages and Systems
Paper in proceedings, 2013
Synchronization is a central issue in concurrency and plays an important role in the behavior and performance of modern programs. Programming language and hardware designers are trying to provide synchronization constructs and primitives that can handle concurrency and synchronization issues efficiently. Programmers have to find a way to select the most appropriate constructs and primitives in order to gain the desired behavior and performance under concurrency. Several parameters and factors affect the choice,
through complex interactions among (i) the language and the language constructs that it
supports, (ii) the system architecture, (iii) possible run-time environments,
virtual machine options and memory management support, and (iv) the characteristics of the application.
We present a systematic study of synchronization
strategies, focusing on concurrent data structures.
We have chosen concurrent data structures with different numbers of contention spots.
We consider both coarse-grain and fine-grain locking strategies, as well as lock-free methods.
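To make the distinction between lock-based and lock-free methods concrete, the sketch below contrasts a coarse-grain locked counter, in which every update contends on a single lock, with a lock-free counter that retries via a hardware compare-and-swap. This is a minimal illustration in Java (one of the languages studied), not an implementation from the study itself; all class and method names are illustrative.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Coarse-grain locking: one monitor guards the whole structure,
// so all threads contend on the same spot.
class LockedCounter {
    private long value = 0;
    public synchronized void increment() { value++; }
    public synchronized long get() { return value; }
}

// Lock-free: no thread ever blocks; a contended increment simply
// retries its compare-and-swap until it succeeds.
class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger();
    public void increment() {
        int cur;
        do {
            cur = value.get();
        } while (!value.compareAndSet(cur, cur + 1)); // CAS retry loop
    }
    public int get() { return value.get(); }
}

public class Demo {
    static long lockedTotal;      // result of the locked counter
    static int lockFreeTotal;     // result of the lock-free counter

    // Two threads hammer both counters; each increments 10,000 times.
    public static void run() {
        LockedCounter lc = new LockedCounter();
        LockFreeCounter fc = new LockFreeCounter();
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                lc.increment();
                fc.increment();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        lockedTotal = lc.get();
        lockFreeTotal = fc.get();
    }

    public static void main(String[] args) {
        run();
        System.out.println(lockedTotal + " " + lockFreeTotal); // 20000 20000
    }
}
```

Under low contention the two behave similarly; the differences in throughput and fairness that the study measures emerge when many threads contend on the same spot.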
We have investigated synchronization-aware implementations in C++, C# (.NET and Mono) and Java.
With respect to machine architecture, we have studied the behavior of the
implementations on both Intel's Nehalem and AMD's Bulldozer.
The properties that we study are throughput and fairness under different workloads and multiprogramming execution environments.
For NUMA architectures, fairness is becoming as important as the typically considered throughput property.
To the best of our knowledge, this is the first systematic and comprehensive study of synchronization-aware implementations.
This paper takes steps towards capturing a number of guiding
principles and concerns for the selection of the programming environment
and synchronization methods in relation to the application and system characteristics.