We are pleased to announce today that MathWorks and AccelerEyes have started working together to provide the best overall solution for GPU computing in MATLAB® through the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™ from MathWorks. This new relationship will bring substantial product updates for users of those two products. Since 2007, AccelerEyes has been a leader in developing GPU software, including Jacket, which it has sold as a third-party add-on to MATLAB®. Effective today, AccelerEyes will discontinue new Jacket product sales. All existing Jacket license holders will continue to receive support and maintenance from AccelerEyes for one year, and all existing Jacket licenses are perpetual and will not expire. Future GPU computing updates …
Jacket v2.3 Now Available
We are pleased to announce the release of Jacket v2.3. This new version of Jacket brings even greater performance improvements to GPU computing for MATLAB® code. (Click here to download v2.3.) Jacket v2.3 adds support for CUDA 5.0, which enables computation on the latest Kepler K20 GPUs in the NVIDIA Tesla product line. This morning we received an email from a Jacket user who said, “V2.3 + CUDA 5 = wow. Just upgraded and re-ran one of the routines that previously took just under 4 minutes – now less than 2 minutes!” This is a must-have release for all Jacket users; the performance improvements are generally felt across the board. Existing Jacket …
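For readers who want to run the same kind of before-and-after comparison on their own routines, here is a minimal timing sketch in MATLAB. The routine is a stand-in, and gsync() is our assumption of Jacket's synchronization helper; substitute your own code and whatever sync mechanism your Jacket version provides.

    % Minimal timing sketch (hypothetical routine, not from the release notes).
    A = gsingle(rand(4096));        % push inputs to the GPU as single precision
    B = gsingle(rand(4096));

    tic;
    C = A * B + sin(A);             % stand-in for your own routine
    gsync();                        % assumed Jacket call: wait for GPU work to finish
    gpuTime = toc;
    fprintf('GPU time: %.3f s\n', gpuTime);

Run the same block before and after upgrading, on the same data, to get a fair comparison.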
Powering Mars Research
The Curiosity Mars rover landing reminded us of a recent talk by Brendan Babb of NASA and UAA in Anchorage about Jacket-accelerated Mars research, given at GTC 2012 in May. The main thrust of this research is improving Mars rover image compression using GPUs and genetic algorithms. With Jacket and GPUs, the researchers achieved 5X speedups on the larger data sizes. The algorithm works by pairing neighboring pixels with a randomly chosen one and then adjusting the random pixel only when the change incrementally improves the match to the original image. Babb described the algorithm as an “embarrassingly” parallel process, ideally suited to GPU acceleration. He estimates he has been able to achieve a 20 to 30 percent error …
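The talk itself is the authoritative source; what follows is only a hypothetical MATLAB sketch of the pixel-adjustment step as we understood it from the description above. The image, step size, and acceptance rule are our own illustration, not the NASA/UAA code.

    % Hypothetical illustration of the greedy pixel-adjustment step described above.
    orig  = rand(512);                       % stand-in for a rover image
    recon = orig + 0.05*randn(size(orig));   % stand-in reconstruction to improve

    for iter = 1:1e5
        i = randi(numel(recon));             % a pixel ...
        j = randi(numel(recon));             % ... paired with a random partner
        newVal = recon(j) + 0.5*(recon(i) - recon(j));
        % keep the nudge only if it moves pixel j closer to the original image
        if abs(newVal - orig(j)) < abs(recon(j) - orig(j))
            recon(j) = newVal;
        end
    end

Because each candidate adjustment can be evaluated independently, the work maps naturally onto thousands of GPU threads, which is what makes the process “embarrassingly” parallel.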
AccelerEyes Celebrates 5 Years with New Product Releases
AccelerEyes just marked its 5th year in business. What better way to celebrate than by releasing new products! We are pleased to present ArrayFire v1.2 and Jacket v2.2 for NVIDIA CUDA-based GPUs. Both releases support the latest Kepler architecture and include an array of new features and performance boosts, especially for image processing functions. Learn more in the ArrayFire release notes and Jacket release notes. AccelerEyes started up in 2007 with the mission to make productive performance accessible to engineers, scientists, and financial analysts. Our core strength has been providing great libraries that are easy to use and faster than alternative approaches. The coolest part about working at AccelerEyes is getting to play a part in the awesome projects of our …
Top 10 List at GTC 2012
It’s going to be hard to sleep tonight. So much GPU goodness awaits over the coming 3 days of the GPU Technology Conference. Here are my top 10 things to do at GTC 2012: Sessions to Attend #1: S0287 – Jacket for Multidimensional Scaling in Genomics – This is a great opportunity to learn about accelerating MATLAB® on the GPU. Come learn why thousands of scientists, engineers, and analysts are using Jacket to do more with less coding hassle. (Day: Tuesday, 05/15; Time: 5:30 pm – 5:55 pm; Location: Room K) #2: S0415 – An Accelerated Weeks Method for Numerical Laplace Transform Inversion – Learn how the researchers used Jacket in MATLAB® to implement the Weeks method more efficiently and robustly. (Day: Wednesday, 05/16; Time: 9:30 …
Jacket v2.1 Now Available
Optimization Library, Sparse Functionality, Graphics Library Improvements, CUDA 4.1 Enhancements, and much more… AccelerEyes announces the release of Jacket v2.1, adding GPU computing capabilities for use with MATLAB®. Jacket v2.1 delivers even more speed through a host of new improvements that maximize GPU device performance and utilization. Notable new features include an Optimization Library and additional functions in our Graphics Library. With Jacket v2.1, we have also extended support for sparse matrix subscripting, improved host-to-device and device-to-host data transfer speeds for complex data, and included various GFOR enhancements. Jacket v2.1 also incorporates NVIDIA CUDA 4.1 enhancements for improved functionality and performance (requires the latest drivers). Jacket is the premier GPU software plugin for MATLAB®, better than alternative …
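For readers new to GFOR, here is a small sketch of where it fits, assuming the standard gfor/gend pattern and Jacket's gsingle/gzeros constructors; the sizes and loop body are ours, not taken from the release notes.

    % Illustrative GFOR sketch: multiply k independent matrix pairs in one batched GPU pass.
    n = 128;  k = 256;
    A = gsingle(rand(n, n, k));       % push data to the GPU
    B = gsingle(rand(n, n, k));
    C = gzeros(n, n, k);

    gfor i = 1:k                      % iterations execute together on the device
        C(:, :, i) = A(:, :, i) * B(:, :, i);
    gend

Unlike an ordinary FOR loop, the iterations are launched as one batched GPU operation rather than one kernel per iteration, which is where the speedup comes from.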
GPU Computing with Jacket in Automated Trader
The Q1 2012 issue of Automated Trader contains an excellent “Mashup!” piece reviewing software for algorithmic trading. The article provides a wonderful glimpse into the 1-2 month adventure of Andy Webb, Automated Trader’s founder, and his Wrecking Crew as they built a fast trading platform from several technologies. We heartily recommend that those of you in financial computing subscribe to get the full story and access to ongoing developments from these Automated Trader thought leaders! The full trading platform they built was quite extensive. The part that caught our eye was the core computational component of the pipeline: running cointegration tests on 1,000 potential pairs over 350 time windows per pair. The single-core MATLAB® version took 70 minutes …
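The article has the real implementation details; below is only a rough MATLAB sketch of that core double loop, with a crude regression-based stand-in where the actual cointegration test would go. All sizes, names, and the test itself are illustrative, not the Automated Trader code.

    % Rough sketch of the core computation: test every candidate pair over every window.
    T        = 5000;   nSeries = 45;            % 45 series -> ~1,000 candidate pairs
    prices   = cumsum(randn(T, nSeries));       % stand-in price histories
    pairs    = nchoosek(1:nSeries, 2);
    nWindows = 350;
    winLen   = 500;
    starts   = round(linspace(1, T - winLen, nWindows));
    score    = zeros(size(pairs, 1), nWindows);

    for p = 1:size(pairs, 1)                    % this double loop is what the
        for w = 1:nWindows                      % GPU version parallelizes
            idx = starts(w):starts(w) + winLen - 1;
            x = prices(idx, pairs(p, 1));
            y = prices(idx, pairs(p, 2));
            % crude stand-in for a cointegration test: regress y on x and
            % estimate how strongly the residual mean-reverts
            b  = [ones(winLen, 1) x] \ y;
            r  = y - [ones(winLen, 1) x] * b;
            dr = diff(r);
            score(p, w) = (r(1:end-1)' * dr) / (r(1:end-1)' * r(1:end-1));
        end
    end

Each pair-window combination is independent of every other, so the 350,000 tests are a natural fit for GPU batching.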
Jacket Continues to Crush the Clone
This morning, I woke up to find the following comment in the MATLAB® Newsgroup: Over two years ago, MathWorks® started building a clone of Jacket, which you now know as the GPU computing support in the Parallel Computing Toolbox™. At the time, many naysayers suggested that Jacket would somehow be eclipsed by the clone. Made sense, right? Wrong! Here we are 2 years later and the clone is still a poor imitation. There are several technical reasons for this, but if you are serious about getting great performance from your GPU, Jacket is the better option. Look at all the real customers that are seeing big benefits. Here are some other recent benchmarks from the Walking …
ArrayFire Support for CUDA 4.1
The question above comes from María (@turbonegra), who follows us at @accelereyes. Many of you are wondering when ArrayFire support for the new CUDA version 4.1 will be released. The answer: work is currently under way. CUDA 4.1 includes a new compiler for Fermi, and many people in the GPU ecosystem have reported slowdowns after upgrading to the new CUDA version. We have therefore delayed releasing ArrayFire and Jacket support for CUDA 4.1 until we can verify performance and reliability across all our unit tests, performance regressions, and customer code samples. Our tests sweep across various driver versions and everything from mobile GeForce cards through server-grade Tesla and Fermi chips. We are still working through the testing and verification at the moment. While …
AccelerEyes Webinar Series
AccelerEyes invites you to participate in a series of webinars designed to help you learn more about Jacket for MATLAB® and ArrayFire for C/C++/Fortran/Python, a comprehensive library of GPU-accelerated functions. GPU Programming for Medical Image Segmentation: January 18, 2012 at 3:00 p.m. EST. A huge volume of data is generated by acquisition modalities such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, and nuclear medicine. A common need is to manipulate and transmit this data using compression techniques in as little time as possible. During this webinar we will show Jacket’s speed in handling volumes, from subscripting to convolutions. Come and learn how to accelerate common medical imaging applications with Jacket for MATLAB®, an easy, powerful programming library. OpenCL and CUDA Trade-Offs and Comparison: February 15, 2012 at …
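As a small taste of the kind of volume manipulation the medical imaging webinar will cover, here is a minimal sketch, assuming Jacket overloads convn and subscripting for GPU arrays as the description implies; the volume, kernel, and slice index are placeholders, not webinar material.

    % Minimal sketch: smooth a stand-in volume on the GPU and pull back one slice.
    vol  = gsingle(rand(256, 256, 128));        % stand-in CT/MRI volume on the GPU
    kern = gsingle(ones(3, 3, 3) / 27);         % simple 3x3x3 box smoother

    smoothed = convn(vol, kern, 'same');        % GPU-overloaded convolution (assumed)
    slice    = double(smoothed(:, :, 64));      % GPU subscripting, then back to the host
    imagesc(slice); axis image; colormap gray;

Register for the webinar to see the real segmentation workflow and the measured speedups.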