Exciting Updates from AccelerEyes

John Melonakos | Announcements | 4 Comments

We are pleased to announce today that MathWorks and AccelerEyes have started working together to provide the best overall solution for GPU computing in MATLAB® through the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™ from MathWorks. This new relationship will result in significant product updates for end users of both products. Since 2007, AccelerEyes has been a leader in developing GPU software, including Jacket, which AccelerEyes has sold as a third-party add-on to the MathWorks MATLAB® product. Effective today, AccelerEyes will discontinue new Jacket product sales. All existing Jacket license holders will continue to receive support and maintenance from AccelerEyes for one year. All existing Jacket licenses are perpetual and will not expire. Future GPU computing updates …

CUDA GPUs Boost Mars Research

ArrayFire | Case Studies, CUDA | Leave a Comment

With the recent news release from NASA about the Mars Curiosity rover, and as a continuation of our previous post “Powering Mars Research”, Brendan Babb is here again to give us an exciting look into Jacket’s role in Mars research from the Curiosity rover. Brendan Babb and his colleague Frank Moore, at the University of Alaska Anchorage, work with NASA’s Jet Propulsion Lab to improve the image quality and image compression of the Mars rover images. Here is what Brendan had to tell us about the use of Jacket in his GPU computing challenges… Brendan Babb: I was thrilled to watch the new Mars rover Curiosity’s successful landing with my visiting nieces and nephews. The new rover will take pictures, …

Jacket v2.3 Now Available

John Melonakos | Announcements, CUDA | 1 Comment

We are pleased to announce the release of Jacket v2.3. This new version of Jacket brings even greater performance improvements through GPU computing for MATLAB® code. (Click here to download v2.3.) With v2.3, support has been added for CUDA 5.0. This newer version of CUDA enables computation on the latest Kepler K20 GPUs in the NVIDIA Tesla product line. This morning we received an email from a Jacket user who said, “V2.3 + CUDA 5 = wow. Just upgraded and re-ran one of the routines that previously took just under 4 minutes – now less than 2 minutes!” This is a must-have release for all Jacket users. The performance improvements are felt across the board. Existing Jacket …

Jacket v2.1 Now Available

Scott | Announcements, CUDA | 2 Comments

Optimization Library, Sparse Functionality, Graphics Library Improvements, CUDA 4.1 Enhancements, and much more… AccelerEyes announces the release of Jacket v2.1, adding GPU computing capabilities for use with MATLAB®. Jacket v2.1 delivers even more speed through a host of new improvements, maximizing GPU device performance and utilization. Notable new features include an Optimization Library and additional functions in our Graphics Library. With Jacket v2.1, we have also extended support for sparse matrix subscripting and improved host-to-device and device-to-host transfer speeds for complex data. In addition, we have included various GFOR enhancements. Jacket v2.1 now includes NVIDIA CUDA 4.1 enhancements that provide improved functionality and performance (requires the latest drivers). Jacket is the premier GPU software plugin for MATLAB®, better than alternative …
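
For readers who have not used GFOR before, here is a minimal sketch of what a GFOR loop looks like, based on the gfor/gend syntax in Jacket’s documentation; the array sizes and variable names are illustrative, not taken from the v2.1 release notes.

    A = grandn(64, 64, 10);        % ten random 64x64 matrices, created on the GPU
    B = grandn(64, 64);            % one 64x64 matrix on the GPU
    C = gzeros(64, 64, 10);        % preallocate the GPU result
    gfor k = 1:10                  % all ten iterations execute in parallel on the GPU
        C(:,:,k) = A(:,:,k) * B;   % batched matrix multiply
    gend
    Chost = double(C);             % cast back to pull the result to the CPU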

AccelerEyes Webinar Series

Scott | Announcements, CUDA, Events, OpenCL | 1 Comment

AccelerEyes invites you to participate in a series of webinars designed to help you learn more about Jacket for MATLAB® and ArrayFire for C/C++/Fortran/Python, a comprehensive library of GPU-accelerated functions. GPU Programming for Medical Image Segmentation: January 18, 2012 at 3:00 p.m. EST. A huge volume of data is generated by acquisition modalities such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, and nuclear medicine. A common need is to manipulate and transmit this data using compression techniques in as little time as possible. During this webinar we will show Jacket’s superior speed in handling volumes, from subscripting to convolutions. Come and learn how to accelerate common medical imaging applications using an easy, powerful programming library with Jacket for MATLAB®. OpenCL and CUDA Trade-Offs and Comparison: February 15, 2012 at …

Jacket v2.0 Now Available

Scott | Announcements, OpenCL | Leave a Comment

New Multi-GPU functionality, added support for OpenCL devices, and much more… AccelerEyes announces the release of Jacket version 2.0, adding GPU computing capabilities for use with MATLAB®. Version 2.0 delivers even more speed through a host of new improvements, maximizing GPU device performance and utilization. Notable new features include a multi-GPU interface and support for OpenCL devices. With Jacket v2.0, your M-code is now portable across all major GPU devices, including AMD/ATI, Intel, and NVIDIA chips. Jacket is the premier GPU software plugin for MATLAB®, better than alternative solutions. It is relied upon by thousands of organizations for rapid prototyping and problem solving across a range of government, manufacturing, energy, media, biomedical, financial, and scientific research applications. Multi-GPU Details: …
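
As a rough illustration of how a multi-GPU workflow can look (a sketch only, assuming Jacket’s ginfo and gselect device-selection commands; the device indices, loop, and array sizes are made up):

    ginfo                          % list the GPUs Jacket can see
    for d = 1:2                    % illustrative: spread work over two GPUs
        gselect(d);                % make GPU d the active device
        X{d} = grandn(2048);       % data created now lives on GPU d
        Y{d} = fft2(X{d});         % each device works on its own slice of the problem
    end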

Jacket on Lenovo Systems

Scott | Announcements, Benchmarks | 1 Comment

Lenovo and AccelerEyes have a joint solution for optimizing M code on Lenovo workstations. The combined HPC solution pairs high-performance Intel Xeon CPUs for daily productivity with unprecedented NVIDIA graphics (GPU) performance for parallel computing with Jacket. Jacket’s comprehensive benchmark suite, when run on Lenovo ThinkStation systems, shows tremendous speedups for a wide variety of computationally intensive applications. Jacket is the world’s fastest and broadest GPU software accelerating the M language commonly found in MATLAB®. Thousands of customers around the world have used Jacket to accelerate their MATLAB code. Lenovo ThinkStation systems are ideally suited to running real-world high-performance applications with Jacket. While the high-end CPUs are ideal for daily productivity tasks, Jacket and the Quadro GPUs perform HPC …

High Performance Compressive Sensing

ArrayFire | Benchmarks, Case Studies | Leave a Comment

A few weeks ago, we published a blog entry demonstrating Jacket’s ability to speed up “compressive sensing”, a technology with wide applications in areas such as image processing, reconstruction, and spectroscopy. Here, we discuss the work of Nabor Reyna Jr. and Wotao Yin from Rice University, who used Jacket to speed up compressive sensing algorithms for reconstruction. This work deals with the reconstruction of signals using partial Fourier matrices (RecPF). The major computational components of the algorithm are shrinkage and FFTs. Jacket was employed to accelerate this compute-heavy code, and the resulting version (gRecPF) was about 5x faster! To reduce the cost of generating the random matrices used in the above method, a second method (RecPC) that …
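
To make the two computational kernels concrete, here is a hedged sketch of a shrinkage (soft-thresholding) step followed by an FFT, pushed to the GPU simply by casting the data to a Jacket array. It is not the authors’ RecPF/gRecPF code; the signal size and threshold are made up.

    x    = gdouble(randn(1024, 1024));        % stand-in signal, moved to the GPU
    tau  = 0.1;                               % illustrative shrinkage threshold
    shrk = sign(x) .* max(abs(x) - tau, 0);   % soft-thresholding (shrinkage) kernel
    spec = fft2(shrk);                        % FFT kernel, also executed on the GPU
    out  = double(spec);                      % cast back to bring the result to the CPU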

New Product Updates – Jacket v1.8, LibJacket v1.1

John Melonakos | Announcements, CUDA | Leave a Comment

Announcements: Jacket v1.8 for MATLAB® is now available; LibJacket v1.1 for C/C++/Python/Fortran is now available; request a FREE GPU computing consultation. Introduction: Enhance your code with the fastest, most comprehensive library for GPU computing. Jacket – the best GPU computing in MATLAB® (take a tour and compare!). LibJacket – the best way to kick-start your CUDA development (take a tour!). Both products enable: manipulation of vectors, matrices, and ND arrays; support for single- and double-precision, boolean, real, and complex numbers; hundreds of routines for arithmetic, linear algebra, statistics, imaging, signal processing, and more (full list: Jacket, LibJacket); and thousands of lines of optimized code for any CUDA-capable GPU. New Product Features: Expanded support for the Signal Processing, Image Processing, and Statistics Libraries included with …
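
For a rough feel of the workflow both products target, here is a minimal Jacket sketch (the sizes and variable names are made up, not taken from the release notes): cast a host array to the GPU, compute, and cast back.

    A  = gsingle(rand(1000));     % cast a CPU matrix to a single-precision GPU array
    B  = A * A' + 1;              % arithmetic and linear algebra execute on the GPU
    m  = mean(B(:));              % reductions and statistics also run on the GPU
    Bh = single(B);               % cast back to a host array when finished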

Using Jacket to design and simulate echo generators

ArrayFire | Case Studies | Leave a Comment

Antenna array design involves repeated simulation to tune the many parameters involved, and waiting around for simulations to finish is no fun. Offloading the optimization problem onto the GPU cuts that time down significantly. In their recent paper, Capozzoli, Curcio, and Liseno (pdf, citation) of the University of Naples Federico II demonstrated how a simple modification to their echo generator array simulation took advantage of the GPU to bring immediate speedups. Check out the figure from their paper showing CPU simulation time growing prohibitively while GPU time grows only slightly as more data is fed in. Their simulation is built around optimizing an energy functional. Using fminunc to drive the optimization problem on the CPU, they simply modified their functional evaluation to take …
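
The pattern they describe translates into MATLAB roughly as follows. This is a hedged sketch rather than the authors’ code: the objective gpuEnergy is a made-up stand-in for their energy functional. fminunc keeps driving the search on the CPU, while each functional evaluation runs on the GPU because its inputs are cast to Jacket arrays.

    function pOpt = runEchoOptimization()
    % Sketch only: CPU-side fminunc, GPU-side functional evaluation.
        p0   = randn(64, 1);                                  % illustrative starting point
        pOpt = fminunc(@gpuEnergy, p0, optimset('Display', 'iter'));
    end

    function E = gpuEnergy(p)
        gp = gdouble(p(:));               % move the current iterate to the GPU
        F  = fft(gp);                     % stand-in for the field/pattern evaluation
        E  = double(sum(abs(F).^2));      % scalar energy handed back to the optimizer
    end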