
April 15-17, 2025 – Kanagawa (Japan)
Following the first InPEx workshop held in Reims (France) in 2023 and the InPEx 2024 workshop held in Sitges (Spain), the InPEx community will meet in April 2025 in Japan. The workshop will gather around 100 experts in the HPC field from Japan, the European Union and the United States.
The 2025 InPEx workshop builds on the results of the InPEx working groups and will address key topics in the current development of the Post-Exascale era:
The workshop will start at 9am on Tuesday 15th of April and will end at 1pm on Thursday 17th of April.
The workshop will be held in Japan at the Rofos Shonan – 1560-44 Kamiyamaguchi, Hayama-machi, Miura-gun, Kanagawa 240-0197.
A detailed agenda of the event can be found below.
This workshop is hosted by the RIKEN Center for Computational Science (R-CCS), with the support of the French program NumPEx (Numerics for Exascale).
Context and objectives of the meeting
USA
Japan - Masaaki KONDO (RIKEN-CCS)
Europe
Each subgroup is divided in two for 120 minutes
Each subgroup is divided in two for 120 minutes
Wrap-up presentations from subgroups A1 and A2, followed by discussion
15 minutes per subgroup
Each subgroup is divided in two for 120 minutes
Each subgroup is divided in two for 120 minutes
(30 minutes per subgroup: 15 minutes of presentation/15 minutes of open discussion)
(30 minutes per subgroup: 15 minutes of presentation/15 minutes of open discussion)
Digital Continuum and Data management
The session will provide a broad overview of the three regional strategies and roadmaps towards the convergence of HPC and AI.
The session will provide a deeper overview of the regional strategies by presenting use cases.
This session aims to present regional and international use cases of interest, exploring the expectations, design and effective implementation of digital continuum components.
Among the use cases:
In addition to the use cases, the session will explore the concept of a “Continuum Digital Twin” (CDT). This digital twin could abstract real systems to support cross-facility workflows without requiring direct interaction with physical infrastructure. By sharing only the essential information needed for orchestration, the digital twin safeguards sensitive aspects of the real infrastructure. Real-time data flows from the infrastructure to the digital twin, ensuring an efficient and secure data exchange. A similar idea is developed in the paper “Digital Twin Continuum: a Key Enabler for Pervasive Cyber-Physical Environments”.
In particular, this session will explore the idea of using the CDT concept for optimizing infrastructure (compute + storage + network) allocation and usage, especially from a sustainability point of view.
See Workflows Community Summit 2024, Future Trends and Challenges in Scientific Workflows – https://arxiv.org/abs/2410.14943
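As a minimal, illustrative sketch of the CDT idea described above (all class, field and function names here are hypothetical, not an agreed InPEx interface), a continuum digital twin can be thought of as a component that receives telemetry from real facilities and exposes only the aggregate state a cross-facility orchestrator needs, for instance to make sustainability-aware placement decisions:

```python
from dataclasses import dataclass

@dataclass
class FacilityState:
    """Abstracted view of one facility: only what orchestration needs."""
    name: str
    free_nodes: int
    free_storage_tb: float
    carbon_gco2_per_kwh: float  # current grid carbon intensity

class ContinuumDigitalTwin:
    """Mirrors the essential state of the real infrastructure.

    Facility-side agents push telemetry here; the orchestrator never
    talks to the physical systems directly, so sensitive details of
    the real infrastructure stay hidden behind the twin.
    """
    def __init__(self) -> None:
        self._facilities: dict[str, FacilityState] = {}

    def update(self, state: FacilityState) -> None:
        # Called on each telemetry tick from a facility.
        self._facilities[state.name] = state

    def place(self, nodes: int, storage_tb: float) -> str | None:
        """Pick the lowest-carbon facility that fits the request."""
        candidates = [
            f for f in self._facilities.values()
            if f.free_nodes >= nodes and f.free_storage_tb >= storage_tb
        ]
        if not candidates:
            return None
        return min(candidates, key=lambda f: f.carbon_gco2_per_kwh).name

# Two fictional sites report their state; the twin picks a placement.
twin = ContinuumDigitalTwin()
twin.update(FacilityState("site-a", 128, 50.0, carbon_gco2_per_kwh=320.0))
twin.update(FacilityState("site-b", 64, 80.0, carbon_gco2_per_kwh=45.0))
print(twin.place(nodes=32, storage_tb=10.0))  # -> "site-b" (greener)
```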
Exascale and Post-Exascale applications are becoming increasingly difficult to build, deploy and maintain under the double pressure of growing machine complexity and the applications’ need to combine multiple compute and data processing paradigms (HPC, HPDA and AI). To address these challenges, our community needs to foster HPC dev-ops methodologies and tools that enhance productivity and improve interoperability, functionality and performance portability, as well as reproducibility.
Topics of discussion cover all HPC dev-ops related subjects, including (but not limited to):
In this session, we want to explore the increasing importance of rapidly evolving AI-driven and AI-coupled HPC/HPDA workflows in computational science and engineering applications.
These workflows typically involve the concurrent, real-time coupled execution of AI and HPC/HPDA tasks in ways that allow the AI systems to steer or inform the HPC/HPDA tasks and vice versa.
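As a hedged, self-contained sketch of this coupling pattern (the solver and the surrogate below are toy stand-ins, not a specific code discussed at the workshop), an AI component can propose inputs for an expensive HPC task and learn from its results within the same loop:

```python
import random

def run_simulation(param: float) -> float:
    """Stand-in for an expensive HPC task (e.g., a physics solver)."""
    return -(param - 0.7) ** 2 + random.gauss(0, 0.01)  # noisy objective

class Surrogate:
    """Trivial AI stand-in: remembers results and proposes new inputs."""
    def __init__(self) -> None:
        self.history: list[tuple[float, float]] = []

    def observe(self, param: float, score: float) -> None:
        self.history.append((param, score))

    def propose(self) -> float:
        if not self.history:
            return random.random()
        # Explore around the best point seen so far (the AI "steers" HPC).
        best_param, _ = max(self.history, key=lambda h: h[1])
        return min(1.0, max(0.0, best_param + random.gauss(0, 0.1)))

surrogate = Surrogate()
for step in range(20):
    p = surrogate.propose()   # the AI task informs the HPC task...
    s = run_simulation(p)     # ...and the HPC result feeds back to the AI.
    surrogate.observe(p, s)

best = max(surrogate.history, key=lambda h: h[1])
print(f"best parameter after 20 coupled steps: {best[0]:.3f}")
```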
Enhancing the entire computing chain—from AI methods to workflow implementation and orchestration—requires the development of common collaborative benchmarks to allow fast progress and evaluation.
The shared-benchmark approach has repeatedly demonstrated its usefulness in deep learning with ImageNet or the GLUE benchmark, which allowed the identification of critical technologies such as ConvNets and Transformers and steered the international community toward relevant research directions.
Adapting this methodology to drive progress in the integration of AI into HPC/HPDA software and application development would help overcome traditional limitations and lead to significant effective performance gains, measured by scientific discovery for a given amount of computing, on different computing architectures, while also testing capability and capacity metrics such as accuracy and scalability.
These benchmarks should be based on shared insights from the different coupling modes of AI and HPC/HPDA workflows, and on the identification of the execution motifs most commonly found in scientific applications, together with well-defined data and comparison metrics.
Adopting standardized practices and tools at the international level is essential for creating a consistent and collaborative framework and accelerating progress.
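As a toy illustration of the “science for a given amount of computing” framing above (the figure-of-merit definition and all numbers are assumptions for the sketch, not a proposed InPEx standard), such a benchmark could rank architectures by scientific quality achieved per node-hour:

```python
# Toy figure of merit: scientific quality achieved per node-hour spent.
# Higher is better; a real benchmark would define both terms precisely.
def figure_of_merit(accuracy: float, node_hours: float) -> float:
    return accuracy / node_hours

# Hypothetical results of one benchmark run on three architectures.
runs = {
    "arch-A (GPU)":   {"accuracy": 0.92, "node_hours": 120.0},
    "arch-B (CPU)":   {"accuracy": 0.90, "node_hours": 400.0},
    "arch-C (mixed)": {"accuracy": 0.93, "node_hours": 150.0},
}

# Rank architectures from best to worst figure of merit.
for name, r in sorted(runs.items(),
                      key=lambda kv: -figure_of_merit(**kv[1])):
    print(f"{name}: FoM = {figure_of_merit(**r):.5f}")
```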
Example use-cases may include:
Adaptive execution for training large AI models
This session will explore the transformative role of Generative AI in scientific research. Key themes include AI-powered hypothesis generation, data augmentation and simulation, and scientific writing and communication with LLMs. Discussions will cover synthetic data generation, models such as AlphaFold, GNoME and FourCastNet, and questions of reproducibility and reliability.