Laplace Transform Inversion on the GPU


The numerical inversion of the Laplace transform is a long-standing problem due to its inherent ill-posedness. Patrick Kano and Moysey Brio of Acunum Algorithms and Simulations, drawing on their experience in computational methods and algorithm development, found a solution that not only works, but is very fast.

Their code implements Weeks' method for numerical Laplace transform inversion. Apart from casting CPU variables to GPU arrays and similar housekeeping, the major step in Jacketizing the code was as simple as converting a FOR loop to GFOR, along the lines of the snippet below:

% original CPU loop: iterations run one after another
for nidx = 1:Nprod
  Errorvec(nidx) = wfncpuErrorEst( ... );
end

% Jacket version: GFOR evaluates all iterations in parallel on the GPU
gfor nidx = 1:Nprod
  Errorvec(nidx) = wfnjacErrorEst( ... );
gend

The loop in question is part of a global minimization: the error-estimate function computes an absolute error estimate for each pair of Weeks parameter values, and GFOR evaluates all of those estimates in parallel on the GPU.
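
For readers who want to experiment with the pattern, here is a minimal, self-contained sketch (assuming Jacket is installed); the parameter grid and the expression inside the loop are illustrative placeholders, not the authors' wfnjacErrorEst:

% Illustrative grid of parameter pairs -- placeholder values only
Nsig = 8;  Nb = 8;  Nprod = Nsig * Nb;
[sig, b] = meshgrid(linspace(0.5, 4, Nsig), linspace(0.5, 4, Nb));
sig = gdouble(sig(:));          % push the parameter vectors to the GPU
b   = gdouble(b(:));

Errorvec = gzeros(Nprod, 1);    % GPU array holding one estimate per pair
gfor nidx = 1:Nprod             % all Nprod iterations evaluate in parallel
  % placeholder "error estimate" for the pair (sig(nidx), b(nidx))
  Errorvec(nidx) = abs(sig(nidx) - b(nidx)) + 1 ./ (sig(nidx) + b(nidx));
gend

minerr = min(Errorvec);         % the global minimization then picks the best pair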

Wrapped Weeks CPU runtime: 28 s

Wrapped Weeks GPU runtime: 7.48 s (roughly a 3.7x speedup)

The timings were run on a Debian Linux machine with an NVIDIA Tesla C2070 GPU, MATLAB R2010b, and Jacket 1.6.

With Jacket 1.7 and above, it would be interesting to optimize this application further. One suggestion from our side would be to explore whether it is possible to maintain just one function, wfnErrorEst, that works on both CPU and GPU inputs (this technique is discussed in this blog post).
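
A rough sketch of what that might look like, assuming Jacket's operator overloading on its GPU array types (the function body is a stand-in, not the actual error estimate):

function e = wfnErrorEst(x)
% Illustrative stand-in for the real error estimate.  Because Jacket
% overloads element-wise arithmetic and reductions on its GPU array
% types, the identical body runs on the CPU for a double input and on
% the GPU for a Jacket array input, e.g.:
%   ecpu = wfnErrorEst(rand(1, 1024));    % CPU
%   egpu = wfnErrorEst(grand(1, 1024));   % GPU
  e = sum(abs(x).^2) ./ numel(x);         % result type follows the input type
end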

The full code can be downloaded here; it includes example images, instructions, and speedup figures.

We thank Patrick Kano and Moysey Brio for their efforts in tackling this tough problem and finding a solution that uses GPU computing with Jacket.
