Optimization Library, Sparse Functionality, Graphics Library Improvements, CUDA 4.1 Enhancements, and much more… AccelerEyes announces the release of Jacket v2.1, adding GPU computing capabilities for use with MATLAB®. Jacket v2.1 delivers even more speed through a host of new improvements, maximizing GPU device performance and utilization. Notable new features include an Optimization Library and additional functions in our Graphics Library. With Jacket v2.1, we have also extended support for sparse matrix subscripting and improved host-to-device and device-to-host data transfer speeds for complex data. In addition, we have included various GFOR enhancements. Jacket v2.1 also incorporates NVIDIA CUDA 4.1 enhancements for improved functionality and performance (requires the latest drivers). Jacket is the premier GPU software plugin for MATLAB®, better than alternative …
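For readers new to GFOR, here is a minimal sketch of the kind of loop the GFOR enhancements apply to. The sizes and variable names are illustrative only; the gfor/gend construct and the grand/gzeros constructors mirror MATLAB's for/rand/zeros in standard Jacket usage.

    % Batch many small matrix multiplies on the GPU with GFOR.
    % Each iteration of the gfor loop is launched in parallel on the device.
    A = grand(16, 16, 100);       % 100 random 16x16 matrices on the GPU
    B = grand(16, 16, 100);
    C = gzeros(16, 16, 100);      % preallocate GPU storage for the results

    gfor k = 1:100
        C(:,:,k) = A(:,:,k) * B(:,:,k);   % all 100 multiplies issued together
    gend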
Jacket v2.0 Now Available
New multi-GPU functionality, added support for OpenCL devices, and much more… AccelerEyes announces the release of Jacket version 2.0, adding GPU computing capabilities for use with MATLAB®. Version 2.0 delivers even more speed through a host of new improvements, maximizing GPU device performance and utilization. Notable new features include a multi-GPU interface and support for OpenCL devices. With Jacket v2.0, your M-code is now portable across all major GPU devices, including AMD/ATI, Intel, and NVIDIA chips. Jacket is the premier GPU software plugin for MATLAB®, better than alternative solutions. It is relied upon by thousands of organizations for rapid prototyping and problem solving across a range of government, manufacturing, energy, media, biomedical, financial, and scientific research applications. Multi-GPU Details: …
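As a flavor of what the multi-GPU interface looks like from M-code, here is a minimal sketch that cycles work across the devices Jacket can see. It assumes the ginfo/gselect device-selection calls and a simple round-robin pattern; the device count, numbering convention, and problem size are illustrative, not a prescribed recipe.

    % Sketch: distribute independent work across multiple GPUs.
    ginfo                      % list the GPU devices visible to Jacket
    nGPUs = 2;                 % e.g. two devices in this machine (assumption)
    for d = 1:nGPUs
        gselect(d);            % make device d the active GPU (numbering per ginfo)
        X = grand(4096);       % allocate and compute on that device
        Y = fft(X);            % each device works on its own slice of the job
    end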
Stanford GPU Benchmarks: Jacket vs PCT/GPU
Researchers in the Pervasive Parallelism Laboratory at Stanford University recently published work describing a novel framework for parallel computing with a paper entitled, “A Domain-Specific Approach to Heterogeneous Parallelism.” As part of their research, they compared Jacket to the GPU support in the Parallel Computing Toolbox™. The results clearly show that Jacket’s optimizations make a big difference in performance. In this blog post, we highlight 4 algorithms included in their research:

Name: Gaussian Discriminant Analysis (GDA)
Description: Generative learning algorithm for modeling the probability distribution of a set of data as a multivariate Gaussian
Input: 1,200×1,024 matrix

Name: Restricted Boltzmann Machine (RBM)
Description: Stochastic recurrent neural network, without connections between hidden units
Input: 2,000 hidden units, 2,000 dimensions

Name: Support Vector Machine (SVM)
Description: Optimal …
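To give a flavor of this kind of workload, here is a minimal sketch of the GDA parameter-estimation step written with Jacket's GPU types. It is our own illustration under assumed names and sizes (the gsingle data, the 0/1 labels, and the variable names are not taken from the Stanford benchmark code); the point is simply that the whole estimate reduces to element-wise and matrix operations that run on the GPU.

    % Sketch: GDA parameter estimation on the GPU (illustrative data).
    X   = gsingle(rand(1200, 1024));            % data matrix pushed to the GPU
    y   = gsingle(double(rand(1200, 1) > 0.5)); % 0/1 class labels
    phi = mean(y);                              % class prior
    mu0 = sum(X .* repmat(1 - y, 1, 1024)) / sum(1 - y);  % mean of class 0
    mu1 = sum(X .* repmat(y, 1, 1024))     / sum(y);      % mean of class 1
    Xc  = X - repmat(mu0, 1200, 1) .* repmat(1 - y, 1, 1024) ...
            - repmat(mu1, 1200, 1) .* repmat(y, 1, 1024);  % center by class mean
    Sigma = (Xc' * Xc) / 1200;                  % shared covariance estimate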
Beam Propagation Methods – Jacket is 3.5X faster than the CPU and 2X faster than PCT
A couple of weeks ago, a GPU-enabled code appeared on MATLAB Central entitled “A CUDA accelerated Beam Propagation Method [BPM] Solver using the Parallel Computing Toolbox.” In this post, we share a video showcasing how Jacket outperforms PCT at GPU computing by analyzing performance on this Beam Propagation Method code. To reproduce these results, download the source code here: CUDA_BPM_NOV_04_2010_AccelerEyes. These benchmarks were run on an NVIDIA Tesla C2070 GPU versus a quad-core Intel CPU. MATLAB + PCT R2010b was used for the PCT-GPU experiments; MATLAB + Jacket 1.6 (prerelease) was used for the Jacket-GPU experiments. Take Home Message: Due to Jacket’s extensive library of GPU functions and its optimized GPU runtime, it performs 3.5X faster than …
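For anyone re-running the comparison, here is a minimal sketch of how GPU code can be timed fairly in MATLAB. The bpm_run function and params argument are placeholders for the solver's entry point, not names from the downloaded code; gsync forces asynchronous GPU work to finish before the clock stops.

    % Sketch: timing a GPU run fairly (bpm_run/params are placeholders).
    bpm_run(params);          % warm-up call: absorbs compile and allocation costs
    gsync;                    % wait for all queued GPU work to finish
    tic;
    for trial = 1:10
        bpm_run(params);      % timed runs
    end
    gsync;                    % flush the GPU queue before reading the timer
    t = toc / 10;             % average wall-clock time per run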
Using Parallel For Loops (parfor) with MATLAB® and Jacket
MATLAB® parallel for loops (parfor) allow the body of a for loop to be executed across multiple workers simultaneously, but with some fairly large restrictions. With Jacket MGL, Jacket can be used within parfor loops, subject to the same restrictions; a minimal sketch of the pattern appears below. However, it is important to note that Jacket MGL does not currently support co-distributed arrays.
Problem Size
Problem size might be the single most important consideration when parallelizing with the Parallel Computing Toolbox (PCT) and Jacket MGL. When data is used by a worker in the MATLAB pool, it must be copied from MATLAB to the worker and copied back when the computation is complete. Additionally, when GPU data is used, it must then be copied by the worker …
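To make those copy costs concrete, here is a minimal sketch of the parfor pattern referenced above. The pool size, matrix size, and per-iteration work are illustrative assumptions; the point is that data crosses both the client-to-worker boundary and the host-to-GPU boundary on every iteration.

    % Sketch: using Jacket inside a parfor loop (illustrative sizes).
    matlabpool open 4                   % one MATLAB worker per core/GPU
    results = zeros(1, 8);
    parfor k = 1:8
        A = rand(2000);                 % created on the worker (host memory)
        G = gsingle(A);                 % copied host -> GPU on that worker
        B = G * G';                     % GPU work
        results(k) = double(sum(B(:))); % copied GPU -> host, then worker -> client
    end
    matlabpool close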