Volume 21, Issue 4
Simulation of Maxwell's Equations on GPU Using a High-Order Error-Minimized Scheme

Tony W. H. Sheu, S. Z. Wang, J. H. Li & Matthew R. Smith

Commun. Comput. Phys., 21 (2017), pp. 1039-1064.

Published online: 2018-04

  • Abstract

In this study, an explicit Finite Difference Method (FDM) based scheme is developed to solve Maxwell's equations in the time domain for a lossless medium. This manuscript focuses on two unique aspects: the three-dimensional, time-accurate discretization of the hyperbolic system of Maxwell's equations on a three-point non-staggered grid stencil, and its application to parallel computing through the use of Graphics Processing Units (GPUs). The proposed temporal scheme is symplectic, thus permitting conservation of all Hamiltonians in the Maxwell equations. Moreover, to enable accurate predictions over large time frames, a phase-velocity-preserving scheme is developed for the treatment of the spatial derivative terms. As a result, the chosen time increment and grid spacing can be optimally coupled; an additional theoretical investigation into this pairing is also presented. Finally, the application of the proposed scheme to parallel computing using a single Nvidia Tesla K20 GPU card is demonstrated. For the benchmarks performed, the parallel speedup relative to a single core of an Intel i7-4820K CPU is approximately 190x.
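The abstract combines three ingredients: an explicit update of the lossless Maxwell system, a three-point non-staggered spatial stencil, and a symplectic (staggered-in-time) integrator. A minimal 1D sketch of how these pieces fit together is shown below; it uses a generic central-difference stencil and leapfrog-style staggering, and is only an illustration of the ingredients named in the abstract, not the authors' error-minimized scheme.

```python
# Illustrative 1D lossless Maxwell update (fields Ez and Hy) on a
# non-staggered grid with a three-point central-difference stencil and a
# symplectic, leapfrog-style time staggering: E is advanced first, then H
# is advanced using the freshly updated E. This is a generic sketch, NOT
# the paper's high-order error-minimized scheme.
import numpy as np

def step(Ez, Hy, c_dt_dx):
    """Advance (Ez, Hy) by one time step; c_dt_dx is the Courant number."""
    # three-point central difference with periodic boundaries
    dHy = (np.roll(Hy, -1) - np.roll(Hy, 1)) / 2.0
    Ez = Ez + c_dt_dx * dHy                 # update E from the curl of H
    dEz = (np.roll(Ez, -1) - np.roll(Ez, 1)) / 2.0
    Hy = Hy + c_dt_dx * dEz                 # update H from the updated E
    return Ez, Hy

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
Ez = np.exp(-((x - 0.5) / 0.05) ** 2)       # Gaussian pulse
Hy = Ez.copy()                              # right-travelling initialization
for _ in range(100):
    Ez, Hy = step(Ez, Hy, 0.5)              # Courant number 0.5
```

Because the update is explicit and each grid point touches only its two neighbours, the scheme maps naturally onto a GPU thread-per-point decomposition, which is the parallelization strategy the abstract describes.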

  • Copyright

COPYRIGHT: © Global Science Press

@Article{CiCP-21-1039,
  author  = {Sheu, Tony W. H. and Wang, S. Z. and Li, J. H. and Smith, Matthew R.},
  title   = {Simulation of Maxwell's Equations on GPU Using a High-Order Error-Minimized Scheme},
  journal = {Communications in Computational Physics},
  year    = {2017},
  volume  = {21},
  number  = {4},
  pages   = {1039--1064},
  issn    = {1991-7120},
  doi     = {10.4208/cicp.OA-2016-0079},
  url     = {http://global-sci.org/intro/article_detail/cicp/11270.html}
}