GPU acceleration of an iterative scheme for gas-kinetic model equations with memory reduction techniques

Lianhua Zhu, Peng Wang, Songze Chen, Zhaoli Guo, Yonghao Zhang

Research output: Contribution to journal › Article

Abstract

This paper presents a Graphics Processing Unit (GPU) acceleration of an iteration-based discrete velocity method (DVM) for gas-kinetic model equations. Unlike previous GPU parallelizations of explicit kinetic schemes, this work is based on a fast-converging iterative scheme. Memory reduction techniques previously proposed for the DVM are adapted to GPU computing, enabling full three-dimensional (3D) solutions of kinetic model equations on contemporary GPUs, which typically have limited memory capacity, whereas such solutions would otherwise require terabytes of memory. The GPU algorithm is validated against direct simulation Monte Carlo (DSMC) simulations of the 3D lid-driven cavity flow and of supersonic rarefied gas flow past a cube, with up to 0.7 trillion phase-space grid points. Performance profiling on three GPU models shows that the two main kernel functions utilize 56%–79% of the GPU computing and memory resources. The performance of the GPU algorithm is also compared with a typical parallel CPU implementation of the same algorithm using the Message Passing Interface (MPI). For the 3D lid-driven cavity flow, the GPU program on the K40 and K80 achieves speedups of 1.2–2.8 and 1.2–2.4, respectively, over the MPI-parallelized CPU program running on 96 CPU cores.
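To illustrate why memory reduction is essential, a rough back-of-the-envelope estimate (assuming a single double-precision distribution-function value stored per phase-space grid point, and ignoring auxiliary arrays) gives the storage that a full, non-reduced solution at the largest reported resolution would need:

\[
0.7 \times 10^{12}\ \text{points} \times 8\ \text{bytes/point} \approx 5.6 \times 10^{12}\ \text{bytes} \approx 5.6\ \text{TB},
\]

which is consistent with the terabytes-of-memory figure quoted above and far exceeds the on-board memory of any single contemporary GPU.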
Original language: English
Article number: 106861
Number of pages: 14
Journal: Computer Physics Communications
Volume: 245
Early online date: 14 Aug 2019
Publication status: Published - 31 Dec 2019

Keywords

  • GPU
  • CUDA
  • discrete velocity method
  • gas-kinetic equation
  • high performance computing
