One of the questions people commonly ask us is: when will Jacket support LAPACK features such as eigenvalue decomposition, matrix inverse, and system solvers? The reason this question is so popular is that people recognize that these kinds of problems are well-suited to the GPU and will deliver great performance boosts for Jacket users. We are looking forward to delivering these functions in Jacket. Jacket is currently built on top of CUDA. For the reasons we like CUDA, see our previous blog post about OpenCL. While NVIDIA is busy building CUDA from the ground up, we are busy building Jacket from the top (MATLAB) down. NVIDIA is working hard to promote and develop LAPACK libraries directly into …
Data-parallelism vs Task-parallelism
In order to understand how Jacket works, it is important to understand the difference between data parallelism and task parallelism. There are many ways to define these terms, but simply put, in our context:

Task parallelism is the simultaneous execution on multiple cores of many different functions, across the same or different datasets.

Data parallelism (also known as SIMD) is the simultaneous execution on multiple cores of the same function across the elements of a dataset.

Jacket focuses on exploiting data parallelism, i.e. SIMD computations. The vectorized MATLAB language is especially conducive to SIMD operations (more so than a non-vectorized language such as C/C++). And if you're going to need a vectorized notation to achieve SIMD computation, why not choose the …
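To make the distinction concrete, here is a minimal sketch in Python (chosen purely for illustration; Jacket itself operates on MATLAB code, and the function names below are hypothetical, not part of Jacket's API). It contrasts mapping one function over every element of a dataset with running several different functions concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

data = [1, 2, 3, 4]

def square(x):
    return x * x

# Data parallelism (SIMD-style): the SAME operation applied to every
# element of a dataset -- the parallel analogue of MATLAB's vectorized d.^2.
with ThreadPoolExecutor() as pool:
    data_parallel = list(pool.map(square, data))

# Task parallelism: DIFFERENT functions executing concurrently,
# here over the same dataset.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(sum, data), pool.submit(max, data)]
    task_parallel = [f.result() for f in futures]

print(data_parallel)  # [1, 4, 9, 16]
print(task_parallel)  # [10, 4]
```

The data-parallel pattern is the one that maps naturally onto a GPU's many cores, which is why vectorized MATLAB expressions are such a good fit for it.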
The NVIDIA MEX-Plugin & Jacket
One of the first questions people ask when considering Jacket for GPU MATLAB computing is the following: how is Jacket different from the MATLAB plugin on the NVIDIA website (found here: http://developer.nvidia.com/object/matlab_cuda.html)? The short answer is that the NVIDIA MEX-plugin requires you to write CUDA code, while Jacket does not. This has many implications and ends up giving you, as a MATLAB programmer, a number of advantages. First, let's describe how the MEX-plugin works:

1. You write CUDA code that solves your problem.
2. You use the MEX configuration files provided by NVIDIA to compile your CUDA code into a MEX file that is callable by MATLAB.
3. MATLAB calls your MEX file, moves data out to the …
OpenCL
We often get questions such as the one we just received via email:

1) Any idea if you will be supporting AMD/ATI cards in the future?
2) Have you considered OpenCL as a potential pathway for the future? I can see an advantage there for you (if it takes off) in that you're not tied to a single vendor any more, and potentially you'd be able to take advantage of other accelerators that may support it. It's very early days yet, but certainly from our point of view the current paradigm of coding to a single vendor's card doesn't seem sustainable.

OpenCL is a community effort to create a standard for parallel computing, with early emphasis on GPGPU computing, …
Welcome!
In an effort to keep people up to date with Jacket-related news, we are pleased to launch this new blog. This blog will serve a few purposes:

- It is a place for things that don't really belong in the documentation, but still need a good explanation.
- It is a place for announcements and updates.

Other sources of information include:

- The Jacket User Guide – Official Jacket documentation
- The Jacket Wiki – Online Jacket documentation
- The Jacket Forums – Online forums where users can post questions, bugs, experiences, feature requests, etc.

We look forward to the launch of this blog and to working with the community to make GPU MATLAB computing a valuable addition to your projects.