Data-parallelism vs Task-parallelism

John Melonakos · CUDA, OpenCL

In order to understand how Jacket works, it is important to understand the difference between data parallelism and task parallelism. There are many ways to define these terms, but simply put, in our context (a short sketch follows the list):

  • Task parallelism is the simultaneous execution on multiple cores of many different functions across the same or different datasets.
  • Data parallelism (aka SIMD) is the simultaneous execution on multiple cores of the same function across the elements of a dataset.
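
To make the contrast concrete, here is a small illustrative MATLAB sketch. The variable names are made up, and plain MATLAB executes these lines one after another, so treat the task-parallel half as conceptual:

    A = rand(1000);  B = rand(1000);

    % Data parallelism: the same function (an elementwise multiply)
    % applied across every element of the dataset at once
    C = A .* B;

    % Task parallelism: two different functions that could run at the
    % same time on different cores, since neither depends on the other
    F = fft(A);
    m = mean(B(:));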

Jacket focuses on exploiting data parallelism, or SIMD computation. The vectorized MATLAB language is especially conducive to SIMD operations (far more so than a non-vectorized language such as C/C++). And if you need a vectorized notation to achieve SIMD computation, why not choose the most popular vectorized notation: MATLAB!
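
For example, here is roughly how a vectorized expression moves onto the GPU with Jacket. This is a sketch assuming Jacket's gsingle GPU data type and the usual cast-back-to-CPU idiom; see the Jacket documentation for the exact interface:

    A = gsingle(rand(1000));    % push the data to the GPU
    B = gsingle(rand(1000));

    % One vectorized expression is one SIMD computation: the same
    % function runs across all elements, so the GPU's many cores can
    % each take a chunk of the data
    C = exp(A) + 2 * B;

    result = single(C);         % pull the result back to the CPU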

As an aside, this is one of the main reasons that CUDA, Brook+, Ct, OpenCL, and the like introduce new notation on top of C/C++: plain C/C++ does not easily describe a SIMD computation.
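
To see the difference in notation, compare the two forms in MATLAB itself. The loop below is how the computation would be phrased in plain C; nothing in it states that the iterations are independent, whereas the vectorized form declares the SIMD structure outright:

    A = rand(1, 8);  B = rand(1, 8);

    % Scalar, C-style notation: per-element independence is only implicit
    C = zeros(size(A));
    for i = 1:numel(A)
        C(i) = A(i) + B(i);
    end

    % Vectorized notation: the same function across all elements, explicitly
    C = A + B;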

Jacket’s upcoming ‘gfor’ parallel for-loop will let you run the iterations of a for-loop simultaneously on the many cores of the GPU. Essentially, this is implemented by converting the problem into a data-parallel one: think of the loop body as the “function”. That function gets executed many times across a dataset, and that’s data parallelism. This works because of a restriction we enforce on ‘gfor’: there can be no data dependencies between loop iterations. We’ll write more about this in the official documentation once it’s released.
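
As a preview, a ‘gfor’ loop might look something like the sketch below. The gfor/gend spelling and the gzeros allocator here are illustrative assumptions, not final syntax; the official documentation will have the real details:

    A = gsingle(rand(100, 100, 64));
    B = gzeros(100, 100, 64);        % assumed GPU preallocation helper

    % The loop body is the "function": each iteration applies it to its
    % own slice k, and no iteration touches data from any other
    gfor k = 1:64
        B(:,:,k) = A(:,:,k) * A(:,:,k)';
    gend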

After we’ve done everything we want to do with data parallelism in Jacket, we’ll move on to task parallelism. In the ideal future, you’d be able to spawn off computations to other CPU cores, GPUs, or other nodes, and have each spawned computation execute in an optimally data-parallel manner. This is really cool and something to look forward to down the road.

Of course, this is meant to be a high-level explanation of things. There are many better places to find detailed information, including the straightforward Wikipedia articles on data parallelism and task parallelism.
