Speakers





Steve Hammond

Director of the Computational Science Center, NREL

DRIVING ADVANCES IN ENERGY WITH HIGH PERFORMANCE COMPUTING

Abstract: In this presentation we will give a brief overview of the U.S. Department of Energy's National Renewable Energy Laboratory (NREL). NREL has provided centralized HPC expertise and resources to advance the leading edge of renewable energy and energy efficiency technologies. We will discuss several application development efforts that are part of the U.S. exascale computing project, targeting effective utilization of exascale systems when they become available. Finally, in the emerging hyper-connected world of ubiquitous sensors, automation and machine learning, we will discuss new challenges and opportunities to predictively model and optimize integrated complex systems.



Paul Gibbon

Head of Computational Science Division
Juelich Supercomputing Centre (FZJ)

OVERVIEW OF EOCOE APPLICATION SUPPORT ACTIVITIES

Abstract: A strategy developed by the Energy Oriented Centre of Excellence (EoCoE) is presented for enhancing the performance of computational models used in a variety of renewable energy domains. Over the course of the 3-year project, 11 community codes were examined in detail and substantially enhanced through a sustained effort involving over 50 domain scientists and tuning experts. It was found that typical applications in this comparatively new sector exhibit a huge range of HPC maturity, from simple parallelization needs to near-exascale readiness. An essential part of this effort has therefore been the introduction of a flexible, quantitative performance assessment of applications using the benchmarking tool JUBE to automatically extract up to 28 different metrics taken with several state-of-the-art performance tools. Hands-on workshops to establish this baseline status were consolidated by longer follow-up actions by joint code teams comprising members of both the developer groups and the HPC centres involved in the EoCoE consortium. Examples of successes achieved with this strategy are given, together with an outlook on the challenges faced by energy applications on next-generation, pre-exascale architectures.



Massimo Celino

Research Scientist, ENEA, Energy Technologies Department
Information and Technology Division

MATERIALS FOR ENERGY

Abstract: Computational materials modelling plays a crucial role in the design of devices for efficient, low-cost energy generation and storage. Indeed, materials modelling techniques can act as a powerful microscope, characterizing the atomic-scale chemical and physical processes in order to design new and improved macroscopic, device-scale properties. Not only highly accurate models but also high performance computing (HPC) infrastructures, advanced ICT services and a tight collaboration among multidisciplinary experts are needed to make a deep impact on the materials-for-energy sector at the European level. Within EoCoE a research line is fully devoted to the application-oriented design of materials at the nano-scale for more efficient energy devices. New models and results in the fields of photovoltaics, supercapacitors and batteries will be presented.



Henrik Madsen

Professor, Head of Section, DTU Compute
Department of Applied Mathematics and Computer Science
Technical University of Denmark

HOW TO USE AI AND BIG DATA ANALYTICS TO ACCELERATE THE TRANSITION TO A FOSSIL-FREE SOCIETY

Abstract: This talk describes a framework, called the Smart-Energy Operating-System (SE-OS), for controlling the electricity load in future integrated energy systems using big data analytics, cyber-physical models, IoT, IoS and cloud computing. We shall focus on methods based on big data analytics for characterizing and enabling energy flexibility in, e.g., buildings, supermarkets and wastewater treatment plants. A primary purpose of the SE-OS framework is to control the power load in integrated energy systems, but the framework can also be used to provide ancillary services (like frequency control, voltage control and congestion management) for power networks with a large penetration of wind and solar power. The set of methodologies is based on grey-box modelling, forecasting, optimization and model predictive control for integrated (power, gas, thermal) energy systems. We will demonstrate that, by carefully selecting the cost function associated with the optimal controllers, the system can ensure energy, cost and emission efficiency. Consequently, by using online-predicted values of the CO2 emission of the related power production, the framework provides a way to accelerate the transition to a fossil-free society.
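To make the cost-function remark above concrete, here is a minimal sketch of the idea in Python: a one-zone grey-box building model whose heating schedule is chosen by a linear program minimising a blend of electricity cost and CO2 emissions under comfort constraints. All numbers (model coefficients, prices, CO2 intensities, comfort band) are hypothetical illustrations, not part of the SE-OS framework.

```python
# Sketch: cost- and CO2-aware load scheduling for a grey-box building model.
# Every coefficient below is a hypothetical illustration, not SE-OS data.
import numpy as np
from scipy.optimize import linprog

H = 24                  # horizon (hours)
a, b = 0.9, 0.5         # grey-box dynamics: T[t+1] = a*T[t] + b*u[t] + (1-a)*Ta[t]
Ta = np.full(H, 5.0)    # outdoor temperature (degC), hypothetical
T0 = 20.0               # initial indoor temperature
t_grid = np.linspace(0, 2 * np.pi, H)
price = 0.2 + 0.1 * np.sin(t_grid)   # EUR/kWh, hypothetical spot price
co2 = 0.3 + 0.2 * np.cos(t_grid)     # kgCO2/kWh (online-predicted in SE-OS)
w = 0.5                 # weight converting emissions into cost units

# Indoor temperature is linear in the load schedule u: T = base + L @ u.
base = np.array([a ** (t + 1) * T0
                 + sum(a ** k * (1 - a) * Ta[t - k] for k in range(t + 1))
                 for t in range(H)])
L = np.array([[b * a ** (t - s) if s <= t else 0.0 for s in range(H)]
              for t in range(H)])

# Comfort band 19..23 degC expressed as linear inequalities in u.
A_ub = np.vstack([L, -L])
b_ub = np.concatenate([23.0 - base, base - 19.0])

# Blended objective: electricity cost plus CO2-weighted emissions.
res = linprog(price + w * co2, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 5.0)] * H)
print("optimal heating schedule (kW):", np.round(res.x, 2))
```

Changing the weight w shifts the schedule between cheapest and cleanest operation, which is the cost-function selection point the abstract makes.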



Zacharias Nicolaou

Computational Scientist
The Cyprus Institute (CyI), Cyprus

HOW SHORTEST-PATH ALGORITHMS ACCELERATE WEATHER-FORECASTING SIMULATIONS

Abstract: Weather-forecasting codes include complex chemical mechanisms which describe a wide range of chemical processes in the atmosphere. Chemistry has a direct effect on the evolution of key prognostic variables which are of increasing interest to the energy industries, including wind speed, sunshine levels and temperature. However, the numerical integration of all associated species presents a heavy computational workload, and most forecasting codes are run without the chemistry component. In an effort to accelerate simulations including chemistry, a novel method, namely Direct Relation Graphs, has been employed in order to reduce the chemical complexity of the system. Direct Relation Graphs identify strong relationships between key target species, and an efficient route-finding algorithm is then used to obtain the strongest path linking the set of target species. This results in a reduction in the total number of species solved for, and significantly accelerates forecasting simulations. Such accelerated simulations can be employed to produce enhanced weather forecasts, which are invaluable to the energy sector. In this talk, an introduction to the method will be presented, as well as results from a direct implementation of the method in a popular weather-forecasting code, namely WRF-Chem.
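As an illustration of the graph formulation described in this abstract, the toy Python sketch below encodes species as nodes, interaction coefficients as edge weights, and finds the strongest paths from a target species with Dijkstra's algorithm in -log space. The mechanism and coefficients are invented for the example; they are not taken from WRF-Chem or from the speaker's work.

```python
# Toy Direct Relation Graph reduction: the strongest multiplicative path
# from a target species becomes a shortest additive path under -log weights.
# Species names and coefficients are hypothetical, not a real mechanism.
import math
import networkx as nx

# Hypothetical interaction coefficients r(A, B) in (0, 1].
edges = {
    ("O3", "NO2"): 0.9, ("O3", "NO"): 0.7, ("NO2", "NO"): 0.8,
    ("NO2", "HO2"): 0.3, ("HO2", "CO"): 0.4, ("NO", "CH4"): 0.05,
    ("HO2", "ISOP"): 0.02,
}
G = nx.DiGraph()
for (a, b), r in edges.items():
    G.add_edge(a, b, weight=-math.log(r))   # product of r's -> sum of -log(r)

target, threshold = "O3", 0.1
keep = {target}
lengths = nx.single_source_dijkstra_path_length(G, target)
for species, dist in lengths.items():
    if math.exp(-dist) >= threshold:        # strongest-path coupling to target
        keep.add(species)

print("species kept:", sorted(keep))        # weakly coupled CH4, ISOP dropped
```

Species whose strongest path to the target falls below the threshold are removed from the mechanism, which is what shrinks the integration workload.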



Julien Bigot

Researcher
Alternative Energies and Atomic Energy Commission (CEA)

PDI, A LIBRARY TO DECOUPLE APPLICATIONS FROM IO CONCERNS

Abstract: In this talk, I will present the Parallel Data Interface (PDI), a declarative API to decouple application codes from the Input/Output strategy to use. I will present its plugin system that supports selecting the best-suited existing IO library through a configuration file in each part of the code, depending on the hardware available, the IO pattern, the problem size, etc. I will demonstrate the advantages of this approach in terms of software engineering and performance through the example of the Gysela5D code.
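PDI itself is a C library (with Fortran and C++ bindings) driven by a YAML specification tree; the short Python mock below is only a schematic analogue of the decoupling pattern the abstract describes: the simulation exposes named data, and a configuration, not the source code, decides which IO plugin handles each datum.

```python
# Schematic analogue of the declarative-IO pattern: application code only
# exposes named data; the config maps each name to a backend plugin.
# This is an illustration of the idea only; it is NOT PDI's actual API.

CONFIG = {"temperature": "hdf5", "checkpoint": "posix", "diagnostics": "none"}

def write_hdf5(name, data):
    print(f"[hdf5]  {name} -> {name}.h5 ({len(data)} values)")

def write_posix(name, data):
    print(f"[posix] {name} -> {name}.dat ({len(data)} values)")

def write_none(name, data):
    pass   # event ignored by the configuration

PLUGINS = {"hdf5": write_hdf5, "posix": write_posix, "none": write_none}

def expose(name, data, config=CONFIG):
    """Hand named data to whichever backend the configuration selects."""
    PLUGINS[config.get(name, "none")](name, data)

# The application code stays identical whatever backend the config selects.
field = [0.0] * 1024
expose("temperature", field)    # routed to the HDF5-like plugin
expose("checkpoint", field)     # routed to plain files
```

Swapping HDF5 for another library, or disabling an output entirely, is then a one-line configuration change rather than a code change, which is the software-engineering advantage the abstract claims.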



Adel El Gammal

Secretary General
EERA

THE EUROPEAN ENERGY RESEARCH ALLIANCE

Abstract: Adel El Gammal, Secretary General of the European Energy Research Alliance (EERA), will speak on the strategic interest of High Performance Computing for energy research. He will first present the activities of EERA and position them within the EU ecosystem, highlighting the current and expected contributions of EERA to the EU Energy Transition and, in particular, to the EC SET Plan. Then, capitalising on the initial interactions between different members of EERA and EoCoE over the last year, he will discuss possible options to materialise the collaboration between EERA and EoCoE in the future.



Pietro Asinari

Full Professor
Politecnico di Torino - Department of Energy

MULTISCALE SIMULATION OF THE THERMAL PROPERTIES OF MATERIALS FOR ENERGY APPLICATIONS

Abstract: Multiscale simulation of the thermal properties of materials offers unique opportunities, but also remarkable challenges, for addressing the engineering needs of energy applications. In this talk, three examples will be discussed: (i) nanofluids, namely suspensions of nanoparticles (NPs), (ii) nanostructured materials for sorption heat storage and (iii) polymeric composite materials reinforced with carbon nanofillers. More specifically, the consequences of adding nanoparticles to traditional fluids will be analysed with regard to their impact on the macroscopic thermo-physical properties of the bare fluid. Moreover, it will be shown how molecular simulations can be used for a better understanding of the transport phenomena occurring in the adsorption/desorption phases of sorption thermal batteries. Finally, the thermal properties of polymeric composite materials reinforced with carbon nanofillers will be investigated. For all these cases, relevant to the energy sector, comprehensive multi-scale modelling approaches and the corresponding computational protocols based on high performance computing (HPC) will be discussed.



Herbert Owen

Senior Researcher
Barcelona Supercomputing Center (BSC)

COMPUTATIONAL FLUID DYNAMICS FOR WIND ENERGY

Abstract: Computational fluid dynamics (CFD) plays a vital role in the decision-making process before the construction of a wind farm, especially for complex terrain where simplified models cannot provide sufficient accuracy. CFD allows extrapolating data measured at a couple of masts to the whole region of interest, thus helping to estimate the energy the wind farm will produce and guiding the positioning of the wind turbines. The Barcelona Supercomputing Center (BSC) collaborates with Iberdrola (https://www.iberdrola.es) on wind resource assessment. As part of this collaboration, the CFD version of the code ALYA (https://www.bsc.es/es/computer-applications/alya-system), developed at BSC, has been adapted so that Iberdrola can use it as an alternative to commercial software for wind farm assessment. This approach has several advantages. The wind farm assessment tool is based on ALYA, a code designed to run efficiently on supercomputers comprising many thousands of processors, which in turn permits simulations using significantly finer meshes than those possible with commercial codes. Furthermore, since BSC is the developer of ALYA, new models can rapidly be implemented and tested. RANS turbulence models are the standard approach for wind farm assessment, but recently LES models are also being considered, since they will become feasible with the advent of exascale computers. In this talk, the improvements to the code obtained during EoCoE will be presented.



Mathieu Lobet

Engineer
Alternative Energies and Atomic Energy Commission (CEA)

HIGH-PERFORMANCE COMPUTING AT EXASCALE: CHALLENGES AND BENEFITS

Abstract: Today's most powerful supercomputers reach a peak performance of 100 petaflops, with a mean performance of 25 petaflops averaged over the 10 most powerful supercomputers in the world. The next-generation exascale supercomputers will reach exaflop computing capacity. In the quest to achieve such massive computing capacity, the US, China, Japan and Europe have already announced projects to build an exascale facility by 2020. However, the goal of realizing it under 20 MW of power consumption is still a daunting hurdle, particularly as the Sunway TaihuLight in China has almost reached this limit. Energy consumption is dominated by two main aspects: the computing units and the network. Although many strategies have been developed in the recent past, such as artificially extending Moore's Law, containing energy consumption and speeding up network communications, today's technologies will not be sufficient to overcome the exascale challenge. Exascale machines will have to face several obstacles, such as the management of an extremely large number of nodes, the design of an extremely efficient network and the development of an adequate software stack. Nodes will be fat and heterogeneous, with a significant number of energy-efficient cores coupled with accelerators of different natures. The most suitable technologies are still under exploration, but prototypes and intermediate machines will become available in the next few years, progressively drawing the future of HPC. It is certain, however, that several different technologies will co-exist at the beginning, until the best ones prevail. In this regard, developers will have to adapt their codes to the most appropriate architecture for their applications without restricting and locking themselves into a specific technology. In this presentation, the exascale computing challenge will be presented with a look at current and forthcoming architectures. Potential exascale computing and accelerator technologies will be reviewed (CPU, GPU, FPGA, ARM) with their pros and cons. We will then focus on programming challenges on very recent and future architectures. To this end, some results from the Particle-In-Cell code Smilei, the tokamak simulation code Tokam3X and the materials science code MetalWalls will be used as examples.



Yvan Notay

Research Director, F.R.S.-FNRS
Université Libre de Bruxelles (ULB)

THE AGMG SOLVER IN EOCOE APPLICATION CODES

Abstract: AGMG is a linear system solver tailored for the very large systems arising from the discretization of scalar elliptic partial differential equations. AGMG is user-friendly and of black-box type. It can thus substitute for any in-house or direct solver, and may therefore be useful in the many simulation software codes based on partial differential equations. This includes several EoCoE application codes, and results obtained in these contexts will be presented, further highlighting the speed, robustness and scalability of AGMG.
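The "black-box" usage pattern the abstract refers to can be illustrated with PyAMG, a different but freely available algebraic multigrid package, standing in here for AGMG (whose Fortran and MATLAB interfaces follow an essentially similar give-me-A-and-b pattern). A minimal sketch:

```python
# Black-box algebraic multigrid solve of a discretised elliptic problem.
# PyAMG is used as a freely available stand-in; this is not AGMG itself.
import numpy as np
import pyamg
from pyamg.gallery import poisson

A = poisson((500, 500), format="csr")      # 250,000-unknown 2D Poisson system
b = np.random.rand(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)           # setup: build the multigrid hierarchy
x = ml.solve(b, tol=1e-8)                  # solve: one call, no tuning required
print("residual norm:", np.linalg.norm(b - A @ x))
```

The point of the pattern is that the calling code never sees grids, smoothers or cycles: it hands over a matrix and a right-hand side, which is what makes such a solver easy to drop into existing PDE codes.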



Matthew Wolf

Research Scientist
University of Bath, United Kingdom

MESO-SCALE MODELLING OF CHARGE TRANSPORT IN HALIDE PEROVSKITES

Abstract: The fundamental nature of charge carrier transport (band-like or polaronic), and the influence thereupon of various scattering mechanisms and defect distributions, are of central importance to the operation of semiconductor-based devices. While there have been numerous investigations aiming to understand these effects in hybrid halide perovskites, there remains much to be understood. The structural and compositional complexity of perovskite-based solar cells makes it extremely difficult to disentangle these effects, and theoretical simulations can provide valuable insights and predictions. So far, modelling has focused on atomistic and continuum length scales, but a model bridging these scales, while taking into account all of the aspects described above, is lacking. Here, we will describe a "device Monte Carlo" meso-scale model, based on well-established semi-classical transport theory, which takes into account the band structure of the material, phonon and defect scattering, and electrostatic fields arising from inhomogeneities in defect and carrier concentrations, using parameters derived from experiment and ab initio calculations. We will present the results of applying this model to charge carrier transport in hybrid halide perovskites, with a particular emphasis on current-voltage characteristics and the experimentally observed effects of changing defect distributions under illumination.



Steve Lisgo

Computational Plasma Physicist
Tungsten Divertor & Plasma-Wall Interactions Section
ITER Organisation

SYSTEMS ANALYSIS WITH ARTIFICIAL INTELLIGENCE BASED PLANET GAMIFICATION

Abstract: Avoiding the collapse of civil society is an interesting optimization problem, since adapting to climate change should be done without making the problem worse. Unfortunately, an efficient deployment of resources this century requires a good model for the future, which is lacking. In an effort to address this issue, it is proposed that a world simulator be developed and packaged in a user-friendly game interface. Human and AI players control the population and advance to the year 2100, looking for positive outcomes, with observed trends used to guide real-world decisions. State-of-the-art environment and climate models are incorporated into the code chain via an integrated modelling infrastructure. A commitment to open collaboration is required from members of the HPC research community. Parallels with fusion reactor design, which is another exercise in complex system analysis, will be drawn.



Jean Jacquinot

Senior Advisor to the Cabinet of the Director-General
ITER Organization

HPC NEEDS ON THE PATH TO CONTROLLED MAGNETIC FUSION ENERGY PRODUCTION

Abstract: Research on energy production from thermonuclear fusion of hydrogen isotope plasmas confined by magnetic fields has changed scale during the last decade. This was motivated by convincing results obtained with JET, the large European experimental device operating in Culham, UK, as well as by research carried out in all developed countries. Thirty-five countries grouped in seven partners have now undertaken to jointly construct and operate ITER, a 15 billion Euro machine, in Cadarache, Provence, France. In addition, large new national programs have developed in Asia (China, India, Japan and Korea) and an ambitious 'Broader Approach' program has been established between Europe and Japan. The need for peta- and even exascale HPC is becoming increasingly pressing for addressing a number of applications, such as realistic simulations of self-organization driven by nonlinear turbulence in hot, large plasmas, real-time control using multiple actuators, and data processing from more than 50 highly complex diagnostics. Although good results have already been obtained with HPC so far, it is clear that the increase in physical scale and the demand for precise understanding when moving towards large-scale energy production justify the request for much increased HPC power during this new era.



Yanick Sarazin

Research Scientist, Research Institute on Controlled Magnetic Fusion
Alternative Energies and Atomic Energy Commission (CEA)

CRITICAL OUTCOMES OF TURBULENCE AND TRANSPORT SIMULATIONS TOWARDS ITER RELEVANT REGIMES

Abstract: On the route towards harnessing controlled magnetic fusion as a source of energy production, high performance computing plays an important role. Three critical issues require dedicated numerical studies in view of securing and optimizing plasma discharges in next-step machines like ITER: the issue of heat loads and plasma-wall interaction, the magneto-hydrodynamic (MHD) stability and dynamics of the magnetic configuration, and the overall energy confinement efficiency of the configuration, mainly governed by turbulence. We will report on recent advances in these three topics, for which EoCoE has made it possible to alleviate critical bottlenecks and to perform simulations closer to the ITER-relevant parameter regime. The main results are detailed in the following.
- The 3-dimensional (3D) fluid code TOKAM3X reveals how the plasma properties are affected by the magnetic configuration at the edge. Moving from a limiter to an axisymmetric X-point boundary strongly modifies the heat load pattern on the divertor target plates, and leads to the spontaneous onset of an edge transport barrier. In addition, the critical impact on transport of a realistic source of neutrals, coupled for the first time to a turbulence code, has been unraveled.
- 3D MHD simulations with the JOREK code have shown that the divertor heat flux pattern is strongly influenced by the flows which naturally develop at the edge. Importantly, they have also confirmed the experimentally reported possibility of controlling potentially threatening Edge Localized Modes (ELMs) by means of resonant magnetic perturbations, and have shed light on the associated physical mechanism.
- 5D simulations of turbulent and collisional transport with the gyrokinetic code GYSELA have been extended to the very core and the outer region, hence providing realistic boundary conditions. In that respect, the interplay between the core and the peripheral plasma region proves essential in explaining the experimentally reported edge plasma turbulence. In addition, numerical upgrades have permitted the implementation of the kinetic response of the electrons, previously neglected, which opens the route to particle transport studies and to the interplay between ion- and electron-driven turbulence regimes, relevant for ITER plasmas.



Bibi S. Naz

Research Scientist, Institute of Bio- and Geosciences: Agrosphere (IBG-3), Forschungszentrum Jülich (FZJ), Jülich, Germany

CONTINENTAL-SCALE HIGH RESOLUTION TERRESTRIAL HYDROLOGIC MODELING OVER EUROPE

Abstract: Continental-scale hydrological research is becoming more important as climate variability and change and anthropogenic impacts, which can take effect over large spatial scales, are increasing. Accurate and reliable hydrologic simulations are important for many applications, such as water resources management, future water availability projections and predictions of extreme events. However, the accuracy of water balance estimates is limited by the lack of observations at large scales and by the uncertainties of model simulations due to errors in model structure and inputs (e.g. hydrologic parameters and atmospheric forcings). This leads to the need for physics-based, high-resolution, large-scale hydrological models. We present a high-resolution (3 km) hydrological model of continental Europe using the integrated Terrestrial Systems Modeling Platform (TerrSysMP) to simulate continental-scale hydrologic estimates of soil moisture, surface runoff, discharge and total water storage. To evaluate uncertainties in our simulated estimates, an assimilation experiment was also conducted over the period 2000-2006 with the Community Land Model, version 3.5 (CLM3.5), integrated with the Parallel Data Assimilation Framework (PDAF) over Europe. The model was forced with the high-resolution reanalysis COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ). Using this modeling framework, the coarse-resolution remotely sensed ESA CCI soil moisture (SM) daily data were assimilated into TerrSysMP-PDAF. The impact of remotely sensed soil moisture data on improving continental-scale hydrologic estimates was analyzed through comparisons with independent observations, including ESA CCI-SM, E-RUN runoff, GRDC river discharge and total water storage from the GRACE satellite. The results demonstrate the potential of assimilating satellite soil moisture observations to improve high-resolution hydrologic model simulations at the continental scale, which is useful for water resources assessment and monitoring.
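For readers unfamiliar with the assimilation machinery mentioned above, the following toy Python sketch shows a single ensemble Kalman filter analysis step of the general kind PDAF provides, applied to a hypothetical 1D soil-moisture column. It is a textbook EnKF update with perturbed observations, not code from TerrSysMP-PDAF.

```python
# Toy ensemble Kalman filter analysis step on a hypothetical 1D soil
# column: a biased forecast ensemble is corrected towards one surface
# observation. Generic textbook EnKF, not TerrSysMP-PDAF code.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens = 50, 32                    # soil cells, ensemble members
truth = 0.3 + 0.05 * np.sin(np.linspace(0, 3, n_state))

# Forecast ensemble: a biased model state plus ensemble spread.
ens = truth[None, :] + 0.04 + 0.03 * rng.standard_normal((n_ens, n_state))
prior_bias = ens[:, 0].mean() - truth[0]

# One satellite-like observation of the surface cell, error variance R.
H = np.zeros((1, n_state)); H[0, 0] = 1.0
R = np.array([[0.02 ** 2]])
y = truth[0] + 0.02 * rng.standard_normal()

# Analysis: K = P H^T (H P H^T + R)^-1, with P from ensemble anomalies.
X = ens - ens.mean(axis=0)
P = X.T @ X / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
for i in range(n_ens):
    y_pert = y + 0.02 * rng.standard_normal()   # perturbed-observation EnKF
    ens[i] += (K @ (y_pert - H @ ens[i])).ravel()

print(f"bias at observed cell: {prior_bias:+.3f} -> "
      f"{ens[:, 0].mean() - truth[0]:+.3f}")
```

Because the gain K is built from ensemble covariances, the single surface observation also corrects correlated deeper cells, which is how sparse satellite data can constrain a full model state.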



Jonas Berndt

Post-doc, Institute of Energy and Climate Research, Forschungszentrum Jülich (FZJ)

ON THE PREDICTABILITY OF EXTREME WIND AND PV POWER FORECAST ERRORS - AN ULTRA-LARGE ENSEMBLE APPROACH

Abstract: Though infrequent by definition, extreme error events in numerical weather predictions and the consequent power predictions for wind and solar plants have disproportionately costly effects on grid stability and energy markets. The insufficient predictability of such events rests on limitations of state-of-the-art numerical weather prediction systems and must therefore be furnished with likelihoods, implying the operation of model ensembles. While such probabilistic forecasts give some insight into the expected model forecast error, present computational resources restrict operational meteorological ensembles to a small ensemble size, such that no good estimate of the likelihoods of more extreme, low-probability events can be provided. Smaller ensembles also do not indicate whether such errors are more likely to result in over- or undersupply events, which would inform the appropriate course of action to avert risk for operators and stakeholders.
Within the EoCoE project, we increase the sizes of meteorological ensembles from O(10) to O(1000) to accomplish an improved approximation of the probability density function. For this purpose, numerical weather predictions are calculated using Ensembles for Stochastic Integration of Atmospheric Simulations (ESIAS), a novel ensemble control system developed at Forschungszentrum Jülich that applies the Weather Research and Forecasting (WRF) Model and the particle-filtering technique for non-linear ensemble-based data assimilation. The resulting meteorological data are converted to power forecasts using two power models applied at Fraunhofer IEE: a physical grid model for regional wind forecasts and a probabilistic regional PV model for solar power production.
The ultra-large ensemble yields probabilistic forecasts with resolved higher-order statistics that indicate extreme error events. We use random sampling of the ultra-large ensemble to investigate how ensemble size affects statistical indicators and what ensemble sizes may be sufficient for anticipating future extreme error events. Results are obtained on the basis of a six-month period with reduced ensemble size and model resolution, with the full system being applied when there is reasonable indication, yielding implications for the establishment of an operational extreme forecast error warning system.
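The subsampling experiment described above can be sketched in a few lines of Python: given one large ensemble of forecast errors, sub-ensembles of increasing size are drawn at random and the spread of their tail-probability estimates is compared. The synthetic heavy-tailed errors below stand in for the real ESIAS/WRF power-forecast errors.

```python
# How well do small ensembles estimate the probability of an extreme
# forecast error? Synthetic heavy-tailed errors stand in for real data.
import numpy as np

rng = np.random.default_rng(1)
big = rng.standard_t(df=3, size=1000)        # O(1000)-member "ensemble"
threshold = np.quantile(big, 0.99)           # define "extreme" as the top 1%
p_ref = (big > threshold).mean()

for m in (10, 50, 100, 500):
    # 200 random sub-ensembles of size m; spread of their tail estimates.
    est = [(rng.choice(big, m, replace=False) > threshold).mean()
           for _ in range(200)]
    print(f"m={m:4d}: P(extreme) = {np.mean(est):.3f} +/- {np.std(est):.3f} "
          f"(reference {p_ref:.3f})")
```

With m=10 the tail estimate is usually exactly zero, which is the point the abstract makes: small operational ensembles simply never sample the events whose likelihood the operator needs.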



Prof. Julien J. Harou

Chair in Water Engineering, School of Mechanical, Aerospace & Civil Engineering, University of Manchester, UK

WATER-ENERGY SYSTEM SIMULATION FOR INFRASTRUCTURE INVESTMENT ANALYSIS

Abstract: River basin development that appropriately allocates water for multiple purposes is key to the socio-economic development of many countries where demands for energy, irrigation, water supply, flood control and ecosystem services are increasing. This talk describes a decision-making approach for water-energy-food-environment (WEFE) system design. The approach aims to enable more transparent, efficient and effective decision-making by evaluating and optimising interventions (new assets and/or policies) within complex, interdependent human-environment systems. Early results demonstrate the benefits of co-designing dams in conjunction with other water, energy, food and ecology (WEFE) resource systems, considering their synergies and trade-offs. As an application, we consider the question of managing and planning investments in dams and systems of dams. To this end, we are building a suite of open software tools to help rapidly conduct such multi-sectoral assessments. The goal is to enable users to build and share system simulation models and link them to design-under-uncertainty approaches. A key innovation proposed is to co-represent river basin and energy systems in a linked simulation that can optimise the role of hydropower within both water-food-ecology and energy systems. The planned outcome is a resource system design approach and associated tool set that helps understand how interventions like dams impact people, economies and ecosystems, and enables achieving the SDGs in a warming and uncertain world.



Dr. Slavko Brdar

Research Scientist, Institute for Advanced Simulation (IAS)
Juelich Supercomputing Centre (FZJ)

PERFORMANCE EVALUATION OF VARIOUS ACCELERATOR ENABLED LINEAR ALGEBRA LIBRARIES AND BOOSTER ARCHITECTURES THROUGH MINIAPPS

Abstract: Recently, hydrological simulations are required to run over continental domains at high resolution over long periods of time in order to analyze climatological impacts on soil and groundwater. Performing simulations at these scales becomes computationally expensive, thus even small gains in solver performance can considerably reduce computing time and energy consumption. We inspected three different architectures to assess their efficiency. In order to avoid restructuring complex legacy codes for every specific architecture, we applied the concept of MiniApps. These applications focus on the main computational kernel of an original code, which for many applications breaks down to solving linear systems of equations. The Python-based ParFlow MiniApp builds a system of linear equations that is analogous to the system of linear equations of the original code, based on the two-point flux approximation method for flow through a heterogeneous porous medium. We employ PETSc solver bindings, which are readily available for CPU clusters as well as for accelerated KNL and GPU clusters.
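A condensed impression of the kind of system the MiniApp builds is given below: a two-point flux approximation for steady flow through a random heterogeneous permeability field, assembled with SciPy. In the actual MiniApp the analogous system is handed to PETSc (petsc4py); SciPy's conjugate-gradient solver stands in for those bindings here, and the grid size and permeability field are illustrative.

```python
# Two-point flux approximation (TPFA) on an n x n grid with log-normal
# permeability; transmissibilities use harmonic averaging across faces.
# SciPy's CG stands in for the PETSc bindings the MiniApp actually uses.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 64                                      # n x n cells, unit spacing
rng = np.random.default_rng(2)
K = np.exp(rng.standard_normal((n, n)))     # log-normal permeability field

def idx(i, j):
    return i * n + j

rows, cols, vals = [], [], []
b = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        diag = 1e-12                        # tiny shift pins the pressure level
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                t = 2.0 / (1.0 / K[i, j] + 1.0 / K[ii, jj])  # harmonic average
                rows.append(idx(i, j)); cols.append(idx(ii, jj)); vals.append(-t)
                diag += t
        rows.append(idx(i, j)); cols.append(idx(i, j)); vals.append(diag)

b[idx(0, 0)], b[idx(n - 1, n - 1)] = 1.0, -1.0   # injection and extraction

A = sp.csr_matrix((vals, (rows, cols)), shape=(n * n, n * n))
p, info = cg(A, b)
print("CG converged:", info == 0,
      "| pressure range:", round(p.min(), 3), round(p.max(), 3))
```

Because assembly and solve are cleanly separated, the same assembled system can be benchmarked against different solver back-ends, which is exactly the comparison the MiniApp concept is designed to enable.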



Giorgio Giordani

Post-doc
Alternative Energies and Atomic Energy Commission (CEA)

ADVANCED NUMERICAL METHODS FOR PLASMA-EDGE SIMULATIONS IN TOKAMAKS

Abstract: The plasma-edge is the outer part of a tokamak plasma, extending from the outer core region to the plasma-facing components. Modelling the dynamics of the plasma-edge is crucial to enhance the performance of the tokamak, in terms of confinement and heat transfer to the walls, and also to design optimized operation scenarios. At IRFM, the 3D turbulence code TOKAM3X is developed to analyze the turbulent heat and mass transfer in the plasma-edge. TOKAM3X is designed to run in a massively parallelized environment. One of the most important bottlenecks of the code is the inversion of the so-called 3D vorticity problem, which allows computing the electric potential in the machine. This problem takes the form of an implicit 3D linear system corresponding to an extremely anisotropic elliptic operator. The EoCoE collaboration network has allowed tackling this issue with new tools, namely three iterative solvers: an in-house GMRES, the solver AGMG and MaPHyS, which have been tested in TOKAM3X. Preliminary results are very promising. In parallel, other code improvement activities have been undertaken, including the development of a new numerical scheme based on a high-order discontinuous Galerkin discretization. This new scheme works on non-aligned computational grids and will introduce new capabilities into the landscape of fluid solvers for the plasma-edge, such as the possibility of computing the transport during a magnetic equilibrium evolution. This will allow performing simulations of tokamak startup and control operations in realistic geometry for both the plasma and the reactor's wall (a capability unique worldwide), and it will also permit enhancing the consistency and flexibility of equilibrium-transport simulations.
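The difficulty posed by the anisotropic vorticity operator can be felt in a toy Python experiment: GMRES on a 2D elliptic operator with an anisotropy ratio of 10^6, with and without a simple incomplete-LU preconditioner. This SciPy sketch is only a stand-in; the solvers actually tested in TOKAM3X (the in-house GMRES, AGMG and MaPHyS) are far more sophisticated.

```python
# GMRES on a strongly anisotropic 2D elliptic operator, with and without
# an ILU preconditioner. A toy stand-in for the 3D vorticity problem.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n, eps = 128, 1e-6                      # grid size; perpendicular/parallel ratio
I = sp.identity(n)
D = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csr_matrix(eps * sp.kron(D, I) + sp.kron(I, D))   # eps*d2/dx2 + d2/dy2
b = np.ones(n * n)

def run(M=None, label=""):
    count = [0]
    x, info = gmres(A, b, M=M, restart=50, maxiter=200,
                    callback=lambda rk: count.__setitem__(0, count[0] + 1))
    res = np.linalg.norm(b - A @ x)
    print(f"{label}: info={info}, callback calls={count[0]}, residual={res:.2e}")

run(label="plain GMRES")
ilu = spilu(A.tocsc(), drop_tol=1e-4)   # cheap incomplete-LU preconditioner
run(M=LinearOperator(A.shape, ilu.solve), label="ILU-preconditioned GMRES")
```

The gap between the two runs illustrates why preconditioning strategy, rather than the Krylov method itself, dominates solver performance on operators of this kind.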



Jesus Labarta

Director Of The Computer Sciences Department
Barcelona Supercomputing Center (BSC)

THE POP PROJECT

Abstract: The talk will present best practices on performance analysis being promoted within the POP CoE project. The approach is based on reporting performance metrics with deep semantic implications that give insight into fundamental aspects of the parallel and sequential behavior of programs. This methodology provides a general framework for communication between performance analysts not specialized in the applications and their developers, resulting in precise suggestions on how to refactor applications towards a more efficient usage of the computing resources. We will show some of the deep analyses that can be performed on large production applications with the performance tools used in POP.



Guido Huysmans

Research Scientist (IRFM, CEA Cadarache)
Professor at Eindhoven University of Technology, ITER Scientist Fellow
European Commission

SIMULATIONS OF MAGNETOHYDRODYNAMIC INSTABILITIES AND THEIR CONTROL FOR ITER

Abstract: The main goal of the ITER project is to create plasmas producing about 500 MW of fusion power for 300-500 s. With approximately 50 MW of power required to heat the plasma, this amounts to a power amplification factor of Q=10. Extrapolations from current tokamak experiments show that a minimum size is required to obtain a Q=10 plasma. The toroidal ITER plasma will have a major radius of 6.2 m and a minor radius of 2 m. Due to its large size, the thermal energy (350 MJ) will be much larger than in current experiments. Magnetohydrodynamic (MHD) instabilities, global instabilities of the magnetic structure of the plasma driven unstable by the plasma pressure and current, can cause fast losses of the thermal plasma energy, on a typical time scale of the order of 1 ms. Disruptions are characterised by a total loss of the thermal plasma energy within several milliseconds due to MHD instabilities. The more localised MHD instability, the so-called Edge Localized Mode (ELM), can cause a loss of 1-10% of the thermal energy within 1 ms. ELMs occur repetitively, with a typical frequency of 1-100 Hz. In present tokamak experiments, the effects of the fast MHD-induced energy losses on the first wall of the machine are mostly tolerable. However, in ITER the estimated heat fluxes for unmitigated disruptions and ELMs are likely to be beyond the melting limits of the plasma-facing components. This imposes a strong requirement for the control of these MHD instabilities. The main method for the control of disruptions is the injection of massive amounts of material in the form of shattered pellets (ice cubes). ELMs can be controlled through the application of small 3D magnetic field perturbations or through the injection of small pellets. Large-scale 3D nonlinear simulations of these MHD instabilities are required, firstly, to improve our understanding of the detailed physics mechanisms of the instabilities and their control methods. Secondly, as ITER plasmas will be in a different plasma regime than current experiments, MHD simulations are required to extrapolate the control requirements towards ITER. The non-linear MHD code JOREK has been developed within the EU fusion program for this purpose. The code uses cubic finite elements to solve the MHD equations over the whole domain, from the main plasma up to the machine walls, including the effects of the metallic conducting structures and the magnetic field coils. Recent extensions include a discrete particle model to describe the evolution of neutrals, impurities and fast particles. The main applications concentrate on the physics of ELMs and disruptions and their control methods, including validation of the simulation results. Following an introduction to ITER and MHD instabilities, characteristic features of the JOREK code will be discussed, together with some recent illustrative applications.



Hervé Guillard

Senior Research Scientist
Inria Sophia Antipolis Mediterranee

FLUX ALIGNED MESH GENERATION FOR TOKAMAKS

Abstract (with Jalal Lakhlili, Adrien Loseille, Alexis Loyer, Boniface Nkonga, Ahmed Ratnani and Ali Elarif): The generation of flux-aligned meshes for tokamaks is a difficult task that is not easily automated and very often asks for manual intervention and specific expertise. This talk will present the work done in the framework of the EoCoE action to design and build a software tool for this task.



Jeff Cumpston

Research Fellow
Juelich Supercomputing Centre (FZJ)

FORMULATION FOR OPTIMIZATION UNDER UNCERTAINTY

Abstract: In this project, the Cyprus Institute has incorporated new methods for predicting aerosol concentrations in the local region in order to inform forecasts of direct normal irradiance (DNI) suitable for use in operational simulations of concentrating solar power (CSP) plants. The DNI forecasts provided by the Cyprus Institute exhibit an error of up to 0.5% up to a time horizon of 48 hours. RWTH Aachen used a suite of rolling DNI forecasts, provided by the Cyprus Institute on an hourly basis with a time horizon of two days, along with forecasts of electricity spot price data generated in-house using an ARIMA model that predicts spot prices based on historical spot price data, as input to the CSP plant scheduling optimisation model. The ARIMA model is accurate to within 10% over a time horizon of two days. A plant controller heuristic was introduced in order to cope with deviations from the forecast input energy and the associated optimal plant set point. The resulting simulated real-time operation of the power plant provided a schedule for plant operation that started generation earlier in the day and shut down earlier in the evening. This may be the result of the limited forecast time horizon in comparison to the simulated ideal case; it may also be a result of the limited ability of the controller to deal with the uncertainties in the forecast data. Comparison of the real-time operation to the ideal case indicates that 98.7% of the maximal possible revenue was achieved for this case study.
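As an illustration of the rolling ARIMA forecast mentioned above, the sketch below fits an ARIMA model with statsmodels to a synthetic spot-price series with a daily cycle and produces a two-day-ahead forecast. The model order (2, 1, 1) and the series itself are assumptions for the example, not the RWTH Aachen model.

```python
# Fit an ARIMA model to a synthetic hourly spot-price series and forecast
# a two-day horizon. Order and data are illustrative assumptions only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
hours = np.arange(24 * 30)                              # 30 days of history
price = (40 + 10 * np.sin(2 * np.pi * hours / 24)       # daily price cycle
         + np.cumsum(0.1 * rng.standard_normal(hours.size)))  # slow drift

model = ARIMA(price, order=(2, 1, 1)).fit()
forecast = model.forecast(steps=48)                     # two-day horizon
print("next 48 h, first 6 values:", np.round(forecast[:6], 2))
```

In a rolling setup of the kind described in the abstract, the fit-and-forecast step would simply be repeated each hour on the updated history, feeding the scheduling optimisation with fresh 48-hour price paths.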



Urs Aeberhard

Research Scientist, Head of Theory and Multiscale Simulation, IEK-5 Photovoltaik
Forschungszentrum Jülich

COMPUTATIONAL CHARACTERIZATION OF PASSIVATED CONTACTS FOR SILICON SOLAR CELLS

Abstract: Passivated contacts are among the key design elements in high-efficiency silicon solar cell architectures such as the silicon heterojunction solar cell (SHJ) or the tunnel-oxide passivated contact silicon solar cell (TOPCon). In our contribution, we provide a computational approach to the microscopic characterization of structural, electronic and dynamical properties at interfaces of crystalline silicon with both hydrogenated amorphous silicon and amorphous silicon oxide, as developed in the framework of the EoCoE project. The interface configurations are generated using ab initio molecular dynamics, and the electronic structure is analyzed based on density functional theory. Special focus is set on the identification and characterization of localized states at the interfaces. For the case of hydrogenated amorphous silicon, the effect of high-temperature annealing is investigated. For the tunnel-oxide architecture, the role of localized states in the transmission of charge carriers through the potential barrier is addressed. Mesoscale charge carrier dynamics (transport and recombination) at the hetero-interfaces is addressed using a state-of-the-art quantum-kinetic simulation framework implemented in the PVnegf code, which was optimized in the EoCoE project.



Andrea Galletti

Ph.D Student, Department of Civil, Environmental and Mechanical Engineering
Trento University (UniTrento)


Abstract: The Italian Alpine region holds the largest share of hydropower production potential in Italy, accounting for more than 75% of the total hydropower-based installed capacity, and hydropower itself satisfies around 20% of the daily electricity demand in Italy. Climate change will likely impact the potential hydropower production over the Alpine region, but to date no large-scale, highly resolved models are available. The ability to provide detailed forecasts of the spatial and temporal variability in hydropower production potential is crucial in decision-making processes, as such forecasts are needed both by stakeholders when planning their investments and by environmental agencies when planning new regulations and directives. Here we present two approaches to modeling hydropower production. The first is based on a preliminary correlation analysis performed between production data and other variables of interest, such as electricity price, energy demand and observed discharge, which revealed that only discharge can be treated as a reliable predictor of hydropower production at the basin scale. With this approach we were able to predict 78% of the total hydropower production in the Italian Alpine region with a correlation of 0.5 or more in every basin. Coupling this approach with discharge time series obtained from hyper-resolved hydrological models (e.g. CLM-ParFlow), we were able to compare the predicted hydropower production with the observed time series. The second approach is referred to as physically based, as it fully models the interaction of anthropogenic infrastructures with the natural hydrologic system. Full detail concerning geometry and operation specifics is given as input to the model, therefore in-depth data collection was necessary to fully characterize the system. The model (i.e. HYPERstream) is able to provide an estimate of the hydropower production time series for every reservoir, and has already been validated with historical data in a case study that will also be presented.



Matteo Valentinuzzi

PhD Researcher
Alternative Energies and Atomic Energy Commission (CEA)

HYBRID KINETIC-FLUID MODELING OF NEUTRAL PARTICLES FOR ITER PLASMAS

Abstract: Power exhaust is one of the major challenges of future tokamaks such as ITER and DEMO. Because of the lack of identified scaling parameters, predictions for plasma conditions in the part of the device designed to handle the exhaust of power and particles (the divertor) usually rely on edge transport codes, which often consist of a fluid code for the plasma (like Soledge2D [1]) coupled to a kinetic Monte Carlo code (such as Eirene [2]) for the neutral particles (atoms, molecules and radiation). The latter incorporates the complex atomic, molecular and surface processes characteristic of edge plasmas. The use of a kinetic description for the neutral gas stems from the fact that in most of the device volume the ratio of the neutrals' mean free path to a representative physical length (the Knudsen number, Kn, which measures how "kinetic" the neutrals behave) is much larger than one. However, in the divertor region the situation can be very different, owing to high densities of the order of 10^20-10^21 m^-3 and low temperatures, below 5 eV, especially for large machines such as ITER or DEMO. In these regions (i) the kinetic description is too detailed (locally Kn << 1) because the neutrals are quasi-Maxwellian, and (ii) the Monte Carlo approach is very inefficient because neutrals undergo many collisions (charge exchange, elastic collisions) before being ionized or leaving the highly collisional region. A hybrid kinetic/fluid model then becomes appealing for the neutral gas, in order to combine the speed of a fluid code with the precision of a kinetic description. In this presentation we will focus on a two-phase model [3] in which the atom population is divided into two phases, fully fluid atoms and fully kinetic ones, coexisting in the whole domain. Additional processes connecting the two phases are introduced, mimicking evaporation and condensation reactions. The rate coefficients for these processes are calculated from the background plasma, in such a way that kinetic neutrals entering a highly collisional region condense into the fluid phase after a few collisions. This entails running the kinetic code, Eirene, at a much lower computational cost, together with a fluid code, here the one presented in [4]. Furthermore, simulations in ITER geometry show that the speed-up in the kinetic code is obtained while introducing only negligible differences in the solution of the coupled plasma-neutrals code.