Designing and running turbulence transport simulations using a distributed multiscale computing approach
Poster (conference), 2013
Multiscale simulation involving slow transport and fast turbulent timescales is one of
three key computational challenges for Magnetic Confinement Plasmas identified in the
PRACE report “The Scientific Case for HPC in Europe 2012-2020”. Whereas in global
gyrokinetic simulation the main challenge is parallelization efficiency (global gyrokinetic
codes scaling to a very large number of cores), the difficulty of the multiscale approach
lies more in the ease and performance of coupling single-scale models together. This
coupling requires generic methods that are efficient and portable, especially when one
(or more) single-scale model is executed remotely because it requires specific hardware,
a larger HPC system, or access to local databases.
The MAPPER project is developing a software infrastructure dedicated to the design and
execution of such distributed multiscale applications. It relies on a coupling library
(MUSCLE) and a few other tools to control the workflow execution and perform data
communication between the different single-scale components (“kernels”). Communication is
transparent whether the kernels run locally or on a remote HPC system.
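As a rough sketch of the coupling pattern this enables, the snippet below shows two kernels exchanging data over named conduits. The in-process queues stand in for the conduits only for illustration; they are not the MUSCLE API, which handles this exchange transparently, including when the kernels run on different machines.

```python
# Minimal sketch of the kernel/conduit coupling pattern, assuming an in-process
# stand-in for the conduits; the real MUSCLE library provides the conduits itself.
import queue
import threading
import numpy as np

# Two named "conduits" modelled as queues (illustrative stand-ins only).
conduits = {"profiles": queue.Queue(), "coefficients": queue.Queue()}

def transport_kernel(n_steps=5):
    """Slow 1-D transport kernel: sends profiles, receives transport coefficients."""
    profiles = np.linspace(1.0, 0.1, 16)          # illustrative radial profiles
    for _ in range(n_steps):
        conduits["profiles"].put(profiles)        # send current profiles downstream
        coeffs = conduits["coefficients"].get()   # block until coefficients arrive
        profiles = profiles + 0.01 * coeffs       # placeholder transport update
    print("final profiles:", profiles)

def turbulence_kernel(n_steps=5):
    """Fast turbulence kernel stub: receives profiles, returns toy coefficients."""
    for _ in range(n_steps):
        profiles = conduits["profiles"].get()
        conduits["coefficients"].put(np.abs(np.gradient(profiles)))

for target in (transport_kernel, turbulence_kernel):
    threading.Thread(target=target).start()
```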
We have implemented such an application using the MAPPER infrastructure and stand-alone
codes developed within the EFDA Integrated Tokamak Modelling (ITM) effort: a 1-D transport
equation solver, 2-D geometry provided by an equilibrium code, and transport coefficients
provided by a 3-D flux-tube code. Thanks to the non-intrusive approach of the coupling
library and to the ITM effort on generic data structures, implementing the kernels is
straightforward and the whole application is modular. This contribution presents the
implementation, performance, and preliminary results obtained with this multiscale method
applied to present-day tokamak configurations.
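For illustration, the workflow can be summarised as a loop over the slow transport timescale in which each step queries the equilibrium and flux-tube kernels. The functions below are hypothetical placeholders standing in for the ITM codes, not their actual interfaces.

```python
# Conceptual sketch of the three-kernel composition described above; all functions
# are illustrative stand-ins for the actual ITM transport, equilibrium, and
# flux-tube codes.
import numpy as np

def solve_equilibrium(profiles):
    """Stand-in 2-D equilibrium: returns a geometry descriptor per radial point."""
    return {"metric": 1.0 + 0.1 * np.gradient(profiles)}

def run_fluxtube(profiles, geometry):
    """Stand-in 3-D flux-tube turbulence run: returns local transport coefficients."""
    return 0.1 * np.abs(np.gradient(profiles)) * geometry["metric"]

def advance_transport(profiles, coeffs, dt):
    """Stand-in 1-D transport step using the supplied coefficients."""
    return profiles + dt * coeffs

profiles = np.linspace(1.0, 0.1, 32)               # illustrative initial profiles
for step in range(100):                             # loop over the slow transport timescale
    geometry = solve_equilibrium(profiles)          # 2-D geometry from the equilibrium kernel
    coeffs = run_fluxtube(profiles, geometry)       # fast turbulence yields coefficients
    profiles = advance_transport(profiles, coeffs, dt=0.01)
```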