Article in Proceedings INPROC-2020-30

Bibliographic data
Naseri, Alireza; Totounferoush, Amin; Gonzales, Ignacio; Mehl, Miriam; Pérez-Segarra, Carlos: A scalable framework for the partitioned solution of fluid–structure interaction problems.
In: Computational Mechanics.
Universität Stuttgart, Fakultät Informatik, Elektrotechnik und Informationstechnik.
English.
Springer, May 2020.
DOI: 10.1007/s00466-020-01860-y.
Article in conference proceedings (conference contribution).
CR classification: G.1.8 (Partial Differential Equations)
J.2 (Physical Sciences and Engineering)
J.3 (Life and Medical Sciences)
Keywords: Mehl, Miriam; Pérez-Segarra, Carlos
Abstract

In this work, we present a scalable and efficient parallel solver for the partitioned solution of fluid–structure interaction problems through multi-code coupling. Two instances of the in-house parallel code TermoFluids are used to solve the fluid and the structural sub-problems, coupled together on the interface via the preCICE coupling library. For fluid flow, the Arbitrary Lagrangian–Eulerian form of the Navier–Stokes equations is solved on an unstructured conforming grid using a second-order finite-volume discretization. A parallel dynamic mesh method for unstructured meshes is used to track the moving boundary. For the structural problem, the nonlinear elastodynamics equations are solved on an unstructured grid using a second-order finite-volume method. A semi-implicit FSI coupling method is used, which segregates the fluid pressure term and couples it strongly to the structure, while the remaining fluid terms and the geometrical nonlinearities are only loosely coupled. A robust and advanced multi-vector quasi-Newton method is used for the coupling iterations between the solvers. Both the fluid and the structural solver use distributed-memory parallelism. The intra-solver communication required for data updates during the solution process is carried out using non-blocking point-to-point communicators. The inter-code communication is fully parallel and point-to-point, avoiding any central communication unit. Inside each single-physics solver, the load is balanced by dividing the computational domain into approximately equal blocks for each process. Additionally, a load balancing model is used at the inter-code level to minimize the overall idle time of the processes. Two practical test cases in the context of hemodynamics are studied, demonstrating the accuracy and computational efficiency of the coupled solver. Strong scalability test results show a parallel efficiency of 83% on 10,080 CPU cores.
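To illustrate the kind of interface quasi-Newton acceleration referred to in the abstract, the following minimal Python sketch applies a least-squares secant (IQN-ILS-type) update to a generic fixed-point operator H, standing in for one fluid–structure coupling iteration. The operator H, the toy linear problem, and the function name iqn_ils_coupling are illustrative assumptions only; this is not the TermoFluids/preCICE implementation, and it omits the reuse of secant information across time steps that characterizes the multi-vector quasi-Newton method used in the paper.

import numpy as np

def iqn_ils_coupling(H, x0, tol=1e-10, max_iter=50):
    # Accelerate the fixed-point iteration x = H(x) with a least-squares
    # (IQN-ILS-type) secant update built from previous coupling iterations.
    x = x0.copy()
    x_tilde = H(x)                      # one solver "round trip" (hypothetical operator)
    r = x_tilde - x                     # interface residual
    V, W = [], []                       # residual / value difference history
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        if not V:                       # no history yet: relaxed fixed-point step
            x_new = x + 0.5 * r
        else:
            Vm = np.column_stack(V)
            Wm = np.column_stack(W)
            alpha = np.linalg.lstsq(Vm, -r, rcond=None)[0]
            x_new = x_tilde + Wm @ alpha
        x_tilde_new = H(x_new)
        r_new = x_tilde_new - x_new
        V.append(r_new - r)             # secant information from this iteration
        W.append(x_tilde_new - x_tilde)
        x, x_tilde, r = x_new, x_tilde_new, r_new
    return x, k

# Toy usage: a contrived linear operator standing in for one coupled
# round trip; the iteration converges to the fixed point x = A x + b.
A = np.array([[0.7, 0.2],
              [0.1, 0.6]])
b = np.array([1.0, -0.5])
x_star, iterations = iqn_ils_coupling(lambda x: A @ x + b, np.zeros(2))

The multi-vector variant described in the abstract additionally carries approximate interface Jacobian information across time steps, which this single-time-step sketch does not attempt.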

Full text and other links
Springer Link
Department(s): Universität Stuttgart, Institut für Parallele und Verteilte Systeme, Simulation großer Systeme
Entry date: June 19, 2020