Since the 1950s, synthetic aperture radar (SAR) systems have gained popularity in both civilian and military domains thanks to their all-weather, day-or-night capabilities and their ability to render different views of a “target”. However, the raw SAR data (phase-history data) must be processed before it is useful: the returns from all point targets are superimposed at each pulse instance, creating a complex interference pattern that reveals little about target location. SAR image formation algorithms compress this target information in the range (frequency) and along-track (azimuth) directions to obtain interpretable images. In the paper titled “SAR image formation toolbox for MATLAB®“, Gorham L.A. and Moore L.J. of the Air Force Research Lab discuss the implementation of the matched filter and backprojection image formation …
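For readers new to backprojection, here is a minimal, conceptual C++ sketch of the per-pulse accumulation step. It is not the toolbox code from the paper; every name (rangeProfile, antX, fc, and so on) is illustrative, and it assumes the phase history has already been range-compressed into one complex profile per pulse.

#include <cmath>
#include <complex>
#include <vector>

using cfloat = std::complex<float>;

// Accumulate the contribution of one range-compressed pulse into the image grid.
// Scene pixels are assumed to lie in the z = 0 plane; real code would interpolate
// the range profile instead of taking the nearest bin.
void backprojectPulse(const std::vector<cfloat>& rangeProfile, // compressed pulse samples
                      float antX, float antY, float antZ,      // antenna position for this pulse
                      float r0, float dr,                      // start and spacing of the range axis
                      float fc,                                // radar center frequency (Hz)
                      const std::vector<float>& px,            // pixel x coordinates
                      const std::vector<float>& py,            // pixel y coordinates
                      std::vector<cfloat>& image)              // flattened px.size() x py.size() grid
{
    const float c = 299792458.0f;
    const float k = 4.0f * 3.14159265358979f * fc / c;         // two-way wavenumber
    for (std::size_t ix = 0; ix < px.size(); ++ix) {
        for (std::size_t iy = 0; iy < py.size(); ++iy) {
            const float dx = px[ix] - antX;
            const float dy = py[iy] - antY;
            const float R  = std::sqrt(dx * dx + dy * dy + antZ * antZ);
            const long bin = std::lround((R - r0) / dr);
            if (bin < 0 || bin >= static_cast<long>(rangeProfile.size())) continue;
            // Undo the two-way propagation phase (sign convention varies) and accumulate.
            image[ix * py.size() + iy] += rangeProfile[bin] * std::polar(1.0f, k * R);
        }
    }
}

Repeating this accumulation over every pulse in the aperture yields the focused image.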
Option Pricing
Andrew Shin, Market Risk Manager at Koch Supply & Trading, achieves significant performance increases on option pricing algorithms by using Jacket to accelerate his MATLAB® code with GPUs. Andrew says, “My buddy and I are, at best, novice programmers and we couldn’t imagine having to figure out how to code all this in CUDA.” But he found Jacket to be straightforward. With these results, he says he can see Jacket and GPUs populating Koch’s mark-to-futures cube, which contains its assets, simulations, and simulated asset prices. Modern option pricing techniques are often considered among the most mathematically complex of all applied areas of finance. Andrew shared some example code to demonstrate how much leverage you can get out of Jacket and GPUs for financial computing in MATLAB® and C/C++. …
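Andrew’s original code was written for Jacket in MATLAB® and is not reproduced in this excerpt. As a rough stand-in, here is a minimal Monte Carlo pricer for a European call sketched with the ArrayFire C++ API (names follow current ArrayFire releases); every parameter value is chosen purely for illustration.

#include <arrayfire.h>
#include <cmath>
#include <cstdio>

int main() {
    using namespace af;
    const int   N     = 1000000;  // simulated price paths
    const float S0    = 100.0f;   // spot price
    const float K     = 105.0f;   // strike
    const float r     = 0.03f;    // risk-free rate
    const float sigma = 0.25f;    // volatility
    const float T     = 1.0f;     // maturity in years

    // One standard normal draw per path, generated directly on the GPU.
    array z = randn(N);

    // Terminal asset price under geometric Brownian motion.
    array ST = S0 * exp((r - 0.5f * sigma * sigma) * T + sigma * std::sqrt(T) * z);

    // Discounted expected payoff of the call option.
    array payoff = max(ST - K, 0.0);
    float price  = std::exp(-r * T) * mean<float>(payoff);

    std::printf("Monte Carlo call price: %f\n", price);
    return 0;
}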
AccelerEyes Celebrates 5 Years with New Product Releases
AccelerEyes just marked its 5th year in business. What better way to celebrate than by releasing new products! We are pleased to present ArrayFire v1.2 and Jacket v2.2 for NVIDIA CUDA-based GPUs. These new products support the latest Kepler architecture and include an array of new features and performance boosts, especially for image processing functions. Learn more in the ArrayFire release notes and Jacket release notes. AccelerEyes started up in 2007 with the mission to make productive performance accessible to engineers, scientists, and financial analysts. Our focus has been to provide great libraries that are easy to use and faster than alternative approaches. The coolest part about working at AccelerEyes is getting to play a part in the awesome projects of our …
ArrayFire Examples and Benchmarks Whitepaper
What do you get when you offer the world’s most comprehensive GPU library for free? Excited users who go the extra mile and give back to the community. Andrzej Chrzęszczyk from Jan Kochanowski University recently wrote an awesome whitepaper, entitled “Matrix Computations on the GPU with ArrayFire for Python and C/C++.” The whitepaper contains many GPU computing tutorial examples as well as performance timings for each example. Andrzej notes, “The purpose of this document is to make the first steps in using modern graphics cards to general purpose computations simpler.” This document is especially beneficial for programmers looking to accelerate Python or C/C++ codes. We thank Andrzej for this fine contribution to the ArrayFire community. His documentation on ArrayFire will be beneficial to all …
No Free Lunch for GPU Compiler Directives
Last week, Steve Scott at NVIDIA put up a viral post entitled “No Free Lunch for Intel MIC (or GPU’s).” It was a great read and a big hit in technical computing circles. The centerpiece of Scott’s argument is that there are no magic compilers: GPUs don’t have them, and neither will MIC. No compiler will be able to automatically recompile existing code and get great performance from MIC or GPUs. Rather, it takes a good amount of elbow grease to write high-performance code. We totally agree. The problem Scott addresses is real. Despite marketing spin to the contrary, developing code for GPUs requires work. However, we don’t agree with Scott’s conclusion that compiler directives are a good solution. You can’t fight …
ArrayFire Pro : Features and Scalability
ArrayFire is a fast GPU library that offloads compute-intensive tasks onto many-core GPUs, reducing application runtimes many times over. ArrayFire is built on top of the NVIDIA CUDA software stack, currently the best and most stable software development kit available for GPU-based computing. ArrayFire comes with a huge set of functions spanning domains such as image processing, signal processing, financial modeling, and applications requiring graphics support. ArrayFire uses an array-based notation (supporting N-dimensional arrays) and allows sub-referencing of, and assignment into, these multi-dimensional arrays. The following code snippet shows how you can index into array objects (a complete sketch follows below). // Generate a 3×3 array of random numbers on the GPU array A = randu(3,3); array a1 …
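Since the snippet above is cut off by the excerpt, here is a self-contained indexing sketch in the ArrayFire C++ API; the variable names beyond A are illustrative and are not the remainder of the original snippet, and the function names follow current ArrayFire releases.

#include <arrayfire.h>

int main() {
    using namespace af;
    // Generate a 3x3 array of random numbers on the GPU.
    array A = randu(3, 3);

    // Sub-referencing: pull out rows, columns, and ranges without leaving the GPU.
    array a1 = A(0, span);       // first row
    array a2 = A(span, 0);       // first column
    array a3 = A(seq(0, 1), 2);  // first two rows of the third column

    // Assignment into a sub-array also runs on the GPU.
    A(span, 2) = constant(1, 3); // overwrite the third column with ones

    af_print(A);
    return 0;
}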
Jacket Continues to Crush the Clone
This morning, I woke up to find the following comment in the MATLAB® Newsgroup: Over two years ago, MathWorks® started to build a clone of Jacket, which you now know as the GPU computing support in the Parallel Computing Toolbox™. At the time, there were many naysayers suggesting that Jacket would somehow be eclipsed by the clone. Made sense, right? Wrong! Here we are two years later and the clone is still a poor imitation. There are several technical reasons for this, but if you are serious about getting great performance from your GPU, Jacket is the better option. Look at all the real customers who are seeing big benefits. Here are some other recent benchmarks from the Walking …
ArrayFire Support for CUDA 4.1
The question above comes from María (@turbonegra), who follows us at @accelereyes. Many of you are wondering when ArrayFire support for the new CUDA version 4.1 will be released. The answer: work is currently under way. CUDA 4.1 includes a new Fermi compiler, and many people in the GPU ecosystem have reported slowdowns after upgrading to the new CUDA version. We have therefore delayed releasing ArrayFire and Jacket support for CUDA 4.1 until we can verify performance and reliability across all our unit tests, performance regressions, and customer code samples. Our tests sweep across various driver versions and everything from mobile GeForce cards through server-grade Tesla and Fermi chips. We are still working through the testing and verification at the moment. While …
AccelerEyes Releases ArrayFire GPU Software
A free, fast, and simple GPU library for CUDA and OpenCL devices. AccelerEyes announces the launch of ArrayFire, a freely-available GPU software library supporting CUDA and OpenCL devices. ArrayFire supports C, C++, Fortran, and Python languages on AMD, Intel, and NVIDIA hardware. Learn more by visiting the ArrayFire product page. “ArrayFire is our best software yet and anyone considering GPU computing can benefit,” says James Malcolm, VP Engineering at AccelerEyes. “It is fast, simple, GPU-vendor neutral, full of functions, and free for most users.” Thousands of paying customers currently enjoy AccelerEyes’ GPU software products. With ArrayFire, everyone developing software for GPUs has an opportunity to enjoy these benefits without the upfront expense of a developer license. Reasons to use ArrayFire: …
Fast Computer Vision with OpenCV and ArrayFire
Update: While the post below discusses LibJacket (no longer a product), you can do the same thing with the newer, but different, ArrayFire library. Improved performance and a simpler API are the results of moving from LibJacket to ArrayFire. Mcclanahoochie just posted some code and instructions for pairing OpenCV with LibJacket to get accelerated computer vision. You can do really fast image processing on live video feeds too. Really cool stuff. Computer vision is hot right now, with applications emerging in defense, radiology, gaming, automotive, and other consumer markets. Computer vision algorithms like these are also going mobile. For instance, we have started to build LibJacket for mobile applications, which runs on Tegra, PowerVR, and other mobile …
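Mcclanahoochie’s original LibJacket code is not shown in this excerpt. As a rough stand-in, here is a minimal sketch of pairing OpenCV with the current ArrayFire C++ API: grab a frame from a webcam and filter it on the GPU. The camera index, conversion constants, and choice of filters are all illustrative.

#include <arrayfire.h>
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // Grab one frame from the default camera with OpenCV.
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;
    cv::Mat frame;
    cap >> frame;
    if (frame.empty()) return 1;

    // Convert to single-channel float in [0, 1] before uploading to the GPU.
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    gray.convertTo(gray, CV_32F, 1.0 / 255.0);

    // OpenCV stores images row-major; ArrayFire is column-major,
    // so upload with swapped dimensions and transpose on the device.
    af::array img(gray.cols, gray.rows, (float*)gray.data);
    img = img.T();

    // Simple GPU processing: denoise with a 3x3 median filter,
    // then extract edges with Sobel derivative filters.
    af::array smooth = af::medfilt(img, 3, 3);
    af::array dx, dy;
    af::sobel(dx, dy, smooth);
    af::array edges = af::sqrt(dx * dx + dy * dy);

    std::printf("max edge response: %f\n", af::max<float>(edges));
    return 0;
}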