What are the challenges and opportunities offered by Exascale Computing?
ARISTOTE [1], the independent philotechnical society that brings together users of digital technologies from public and private organizations, business schools, SMEs and associations, organized a scientific workshop dedicated to exascale computing on May 23, 2019 at École polytechnique in Palaiseau, France [2].
Sharing experience
The workshop was held to promote the exchange of experience about scientific HPC & AI applications. This year's edition was dedicated to the new generation of computing systems heading towards exascale. The new challenge in high-performance computing is to reach at least one quintillion (a billion billion), whether in floating-point operations per second (exaFLOPS) or in units of storage capacity, while improving the energy efficiency of the technical equipment.
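To give an order of magnitude of that quintillion, here is a minimal back-of-the-envelope sketch; the ~100 GFLOPS laptop figure is an illustrative assumption, not a number from the workshop:

#include <cstdio>

int main() {
    const double exa_ops = 1e18;   // operations an exaFLOPS machine performs per second
    const double laptop  = 1e11;   // assumed laptop peak: ~100 GFLOPS
    const double peta200 = 2e17;   // ~200 petaFLOPS, roughly today's top systems

    // Time each system would need for the work an exascale machine does in one second.
    std::printf("~100 GFLOPS laptop: %.0f days\n", exa_ops / laptop / 86400.0);
    std::printf("200 PFLOPS system : %.1f seconds\n", exa_ops / peta200);
    return 0;
}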
At the opening session, statements were made by Mr Laurent CROUZET, Project Manager at the Ministry of Higher Education and Research, about the European High-Performance Computing Joint Undertaking (EuroHPC). Then several topics were presented by members of the scientific and industrial communities (CNRS, CEA, GENCI, IRISA, LaBRI, CERFACS, X/IPSL and ONERA). They covered exascale computing systems (hardware and software) as well as HPC & AI applications such as the scientific understanding of air quality [3], numerical climate modelling [5], and the simulation of two-phase flow combustion and turbulence modelling [4].
Developing HPC Supercomputers inside and outside Europe
Currently, most of the powerful machines are located outside Europe, in the USA, Japan and China. In addition, the existing EU supercomputers depend on non-European technology. Hence, European HPC applications and data are increasingly processed outside the EU. This situation may create problems related to privacy, data protection, commercial trade secrets and ownership of data. For this reason, the EuroHPC Joint Undertaking [6] was established to coordinate the strategies and investments of the European countries in order to develop HPC supercomputers in the EU.
Every country or consortium of countries in the world is competing to build the top-of-the-range exascale supercomputers, and the first ones have been announced for the coming years. Intel Corporation, in partnership with Cray Inc. and Argonne National Laboratory, will deliver Aurora, the first U.S. exascale supercomputer, which should be operational by the end of 2021 [7]. China has built three exascale supercomputer prototypes: Sugon, TIANHE-3 and Sunway. Its first exaflop supercomputer will be TIANHE-3, which is scheduled to boot up in 2020 [8]. Fujitsu and the Japanese research institute RIKEN are also aiming to deliver an exascale supercomputer, Post-K, which should be publicly operating in 2021 [9]. The European Union also intends to cross the exascale threshold: the target is to have at least two pre-exascale computers by 2020 and to reach full exascale performance by 2023 [10].
The growing need for exascale supercomputers
Nowadays, high-performance computing has reached about 200 petaFLOPS, and this figure could still increase thanks to the rapid evolution of the power and complexity of computer microarchitectures. Moore's Law predicted that the number of transistors in a dense integrated circuit (i.e., roughly, the overall processing power) doubles about every two years. However, this law is now running out of steam. Why? Because of the power wall, one of the major obstacles the chip industry is facing, and because of the great amount of data to be processed, which is an energy-intensive activity.
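Moore's observation can be written as N(t) = N0 · 2^(t/2), with t in years. The short sketch below projects transistor counts under that assumption; the starting count of ~2 billion transistors is illustrative:

#include <cmath>
#include <cstdio>

int main() {
    // Moore's Law: the transistor count doubles roughly every two years,
    // i.e. N(t) = N0 * 2^(t/2) for t in years.
    const double n0 = 2e9;  // assumed starting point: ~2 billion transistors
    for (int t = 0; t <= 10; t += 2) {
        std::printf("year %2d: %4.0f billion transistors\n",
                    t, n0 * std::pow(2.0, t / 2.0) / 1e9);
    }
    return 0;
}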
To overcome this scaling problem and preserve the growth in performance, new hardware technologies are increasingly heterogeneous: they rely on more computing cores and on accelerators (GPU, FPGA, MPPA, Xeon Phi, TPU, …), and they are more energy efficient.
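As an illustration of this heterogeneous style of programming, here is a minimal sketch of offloading a DAXPY kernel to an accelerator with standard OpenMP target directives. It assumes an offload-capable OpenMP 4.5+ compiler; without an attached device the loop simply runs on the host:

#include <cstdio>
#include <vector>

// y = a*x + y, offloaded to an attached accelerator (e.g. a GPU).
void daxpy(double a, const std::vector<double>& x, std::vector<double>& y) {
    const double* xp = x.data();
    double* yp = y.data();
    const long n = static_cast<long>(y.size());
    // Map the arrays to device memory, run the loop there in parallel,
    // and copy y back when done.
    #pragma omp target teams distribute parallel for \
        map(to: xp[0:n]) map(tofrom: yp[0:n])
    for (long i = 0; i < n; ++i)
        yp[i] = a * xp[i] + yp[i];
}

int main() {
    std::vector<double> x(1000, 1.0), y(1000, 2.0);
    daxpy(3.0, x, y);
    std::printf("y[0] = %f\n", y[0]);  // expect 5.0
    return 0;
}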
In addition, addressing the challenges of software productivity and performance portability becomes necessary to take advantage of the emerging exascale supercomputers. New parallel and scalable algorithms use different paradigms of parallelization at each hardware level (inter/intra-node, offload, vectorization and instruction-level parallelism). They introduce portable and domain-specific languages (DSLs), APIs and libraries; use runtimes to efficiently schedule the parallel tasks; apply artificial intelligence to high-performance computing and vice versa; and use debugging and profiling tools for HPC applications.
The future of Exascale
In conclusion, the future exascale supercomputers will be energy efficient and increasingly heterogeneous, including multi/many-core processors and hardware dedicated to specific applications. They will feature new levels of persistent memory as well as optical circuits and networks.
Moreover, an effort must be made on programming practices to exploit the parallelism available at every hardware level. It should be noted that, among all parallel programming tools, MPI [11] and OpenMP [12] are expected, as they evolve, to remain the most efficient and portable tools on heterogeneous architectures.
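A minimal sketch of the hybrid style these two standards enable, with MPI distributing work across nodes and OpenMP threading within each rank (compile with e.g. mpicxx -fopenmp; the partial-sum workload is illustrative):

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    // MPI handles inter-node parallelism; request thread support
    // so OpenMP can run inside each rank.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank sums its slice of 1..n with an OpenMP-threaded loop.
    const long n = 100000000;
    const long lo = rank * n / size + 1;
    const long hi = (rank + 1) * n / size;

    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = lo; i <= hi; ++i)
        local += static_cast<double>(i);

    // Combine the per-rank partial sums on rank 0.
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum = %.0f (expected %.0f)\n", total, 0.5 * n * (n + 1.0));

    MPI_Finalize();
    return 0;
}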
Finally, the operating business model of supercomputers is evolving, shifting from accounting in CPU.Hour to accounting in Watt.Hour.
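A toy sketch of the difference between the two accounting units, using assumed figures (a 1,000-node job at 500 W per node, 48 cores per node, running for 10 hours; these numbers are illustrative, not from the workshop):

#include <cstdio>

int main() {
    // Illustrative job size.
    const double nodes = 1000, cores_per_node = 48, watts_per_node = 500, hours = 10;

    // Classic accounting: charge by consumed core-hours.
    const double cpu_hours = nodes * cores_per_node * hours;

    // Energy-based accounting: charge by consumed watt-hours.
    const double watt_hours = nodes * watts_per_node * hours;

    std::printf("CPU.Hour : %.0f core-hours\n", cpu_hours);
    std::printf("Watt.Hour: %.0f Wh (= %.1f MWh)\n", watt_hours, watt_hours / 1e6);
    return 0;
}

Under the Watt.Hour model, an application that finishes the same work on less power-hungry hardware is directly rewarded, which matches the energy-efficiency goal stated above.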
References