Article in Journal ART-2020-09

Bibliography: Naseri, Alireza; Totounferoush, Amin; González, Ignacio; Mehl, Miriam; Pérez-Segarra, Carlos David: A scalable framework for the partitioned solution of fluid–structure interaction problems.
In: Computational Mechanics. Vol. 66.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology.
pp. 471-489, English.
Springer Verlag, May 2, 2020.
https://doi.org/10.1007/s00466-020-01860-y.
Article in Journal.
CR-Schema: J.2 (Physical Sciences and Engineering)
J.3 (Life and Medical Sciences)
I.6.3 (Simulation and Modeling Applications)
Keywords: Fluid-Structure Interaction; Partitioned Method; Multi-Code Coupling; Scalability; HPC
Abstract

In this work, we present a scalable and efficient parallel solver for the partitioned solution of fluid–structure interaction problems through multi-code coupling. Two instances of an in-house parallel software, TermoFluids, are used to solve the fluid and the structural sub-problems, coupled together on the interface via the preCICE coupling library. For fluid flow, the Arbitrary Lagrangian–Eulerian form of the Navier–Stokes equations is solved on an unstructured conforming grid using a second-order finite-volume discretization. A parallel dynamic mesh method for unstructured meshes is used to track the moving boundary. For the structural problem, the nonlinear elastodynamics equations are solved on an unstructured grid using a second-order finite-volume method. A semi-implicit FSI coupling method is used which segregates the fluid pressure term and couples it strongly to the structure, while the remaining fluid terms and the geometrical nonlinearities are only loosely coupled. A robust and advanced multi-vector quasi-Newton method is used for the coupling iterations between the solvers. Both the fluid and the structural solver use distributed-memory parallelism. The intra-solver communication required for data update in the solution process is carried out using non-blocking point-to-point communicators. The inter-code communication is fully parallel and point-to-point, avoiding any central communication unit. Inside each single-physics solver, the load is balanced by dividing the computational domain into fairly equal blocks for each process. Additionally, a load balancing model is used at the inter-code level to minimize the overall idle time of the processes. Two practical test cases in the context of hemodynamics are studied, demonstrating the accuracy and computational efficiency of the coupled solver. Strong scalability test results show a parallel efficiency of 83% on 10,080 CPU cores.
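
The abstract outlines a partitioned, multi-code coupling driven by the preCICE library. As a rough, hypothetical illustration of what such a coupling loop looks like from one of the two solvers, the sketch below uses the preCICE 2.x C++ API (SolverInterface). The participant, mesh, and data names ("Fluid", "Fluid-Mesh", "Forces", "Displacements") are assumptions for illustration only; the authors' actual TermoFluids adapters, the semi-implicit pressure segregation, and the multi-vector quasi-Newton acceleration are not reproduced here (the acceleration is configured in preCICE, the rest lives inside the solvers).

// Minimal sketch of a fluid-side coupling loop via the preCICE 2.x C++ API.
// Participant, mesh, and data names are illustrative placeholders, not the
// configuration used in the paper; the flow solver call itself is elided.
#include <precice/SolverInterface.hpp>
#include <vector>

int main() {
  // One participant per solver executable; rank/size normally come from MPI.
  precice::SolverInterface precice("Fluid", "precice-config.xml", /*rank=*/0, /*size=*/1);

  const int dim    = precice.getDimensions();
  const int meshID = precice.getMeshID("Fluid-Mesh");

  // Register the wet-surface vertices owned by this rank (one dummy vertex here).
  std::vector<double> coords(dim, 0.0);
  int vertexID = 0;
  precice.setMeshVertices(meshID, 1, coords.data(), &vertexID);

  const int forceID = precice.getDataID("Forces", meshID);
  const int dispID  = precice.getDataID("Displacements", meshID);
  std::vector<double> forces(dim, 0.0), displacements(dim, 0.0);

  double dt = precice.initialize();
  while (precice.isCouplingOngoing()) {
    if (precice.isActionRequired(precice::constants::actionWriteIterationCheckpoint())) {
      // implicit coupling: save the solver state in case the step must be repeated
      precice.markActionFulfilled(precice::constants::actionWriteIterationCheckpoint());
    }

    // ... advance the fluid sub-problem by dt and evaluate the interface forces ...

    precice.writeBlockVectorData(forceID, 1, &vertexID, forces.data());
    dt = precice.advance(dt);  // parallel data exchange + coupling acceleration
    precice.readBlockVectorData(dispID, 1, &vertexID, displacements.data());

    if (precice.isActionRequired(precice::constants::actionReadIterationCheckpoint())) {
      // coupling iteration not converged: restore the saved state and re-iterate
      precice.markActionFulfilled(precice::constants::actionReadIterationCheckpoint());
    }
  }
  precice.finalize();
  return 0;
}

With this structure, preCICE takes over the fully parallel, point-to-point inter-code communication and the coupling-iteration control mentioned in the abstract, while each single-physics solver keeps its own distributed-memory parallelism.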

Department(s): University of Stuttgart, Institute of Parallel and Distributed Systems, Simulation of Large Systems
Entry date: July 20, 2020