ArrayFire Examples and Benchmarks Whitepaper

ArrayFire | ArrayFire

What do you get when you offer the world’s most comprehensive GPU library for free? Excited users who go the extra mile and give back to the community. Andrzej Chrzęszczyk from Jan Kochanowski University recently wrote an awesome whitepaper, entitled “Matrix Computations on the GPU with ArrayFire for Python and C/C++.” The whitepaper contains many GPU computing tutorial examples as well as performance timings for each example. Andrzej notes, “The purpose of this document is to make the first steps in using modern graphics cards to general purpose computations simpler.” This document is especially beneficial for programmers looking to accelerate Python or C/C++ codes. We thank Andrzej for this fine contribution to the ArrayFire community. His documentation on ArrayFire will be beneficial to all …
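To give a flavor of the kind of examples the whitepaper walks through, here is a minimal first ArrayFire program in C++. This is only a sketch using the current ArrayFire 3.x API (af::randu, af::matmul, af::diag, af::sum); the exact calls shown in the whitepaper may differ.

```cpp
// A first ArrayFire program: create data on the GPU and compute with it
// without writing any CUDA kernels by hand. (Sketch only; ArrayFire 3.x names.)
#include <arrayfire.h>
#include <cstdio>

int main() {
    af::array A = af::randu(1024, 1024);        // 1024 x 1024 uniform random matrix on the GPU
    af::array B = af::matmul(A, A.T());         // A * A'
    float d = af::sum<float>(af::diag(B));      // reduce the diagonal back to the host
    printf("sum of diag(A * A') = %g\n", d);
    return 0;
}
```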

AccelerEyes is Hiring at GTC 2012

John Melonakos | Announcements, Events

Do you want to code GPUs daily? Do you want to build software that actually gets used by real people, solving real problems? Do you want to join the whirlwind of a startup where you own projects and determine success or failure? Then come work at AccelerEyes. AccelerEyes is hiring for 3 positions: Inside Salespersons, Full-time Engineers, and Remote Contract Developers. Check out our Careers page or swing by our booth at GTC for more info.

Top 10 List at GTC 2012

John Melonakos | Announcements, Events

It’s going to be hard to sleep tonight. So much GPU goodness awaits over the coming 3 days of the GPU Technology Conference. Here are my top 10 things to do at GTC 2012:

Sessions to Attend

#1: S0287 – Jacket for Multidimensional Scaling in Genomics – This is a great opportunity to learn about accelerating MATLAB® on the GPU. Come learn why thousands of scientists, engineers, and analysts are using Jacket to do more with less coding hassle. (Day: Tuesday, 05/15; Time: 5:30 pm – 5:55 pm; Location: Room K)

#2: S0415 – An Accelerated Weeks Method for Numerical Laplace Transform Inversion – Learn how the researchers have been able to use Jacket in MATLAB® to implement the Weeks method more efficiently and robustly. (Day: Wednesday, 05/16; Time: 9:30 …

Benchmarking the new Kepler (GTX 680)

Pavan Yalamanchili | Benchmarks, CUDA

NVIDIA has launched its next-generation GPU based on the Kepler architecture, and followed it up with a rather quick update to the CUDA toolkit. Since we have access to 3 generations of GTX cards (480, 580 and 680), we thought we would showcase how performance has changed across the generations. Matrix multiplication: The GTX 680 comfortably breaches the 1 teraflop mark for single precision, while the GTX 580 barely scratches it. However, the GTX 680’s performance seems to peak around 2048 x 2048 and then falls back to match the GTX 580 at larger sizes. The high-end Tesla C2070 finishes last for single precision, behind the third-placed …
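For readers who want to reproduce this kind of measurement, here is a minimal sketch of a single-precision matrix-multiplication benchmark using ArrayFire’s C++ API. It uses current ArrayFire 3.x names (af::matmul, af::sync, af::timer), which may differ from the syntax available at the time; the sizes and iteration counts are illustrative.

```cpp
// Minimal matmul benchmark sketch (ArrayFire 3.x C++ API).
#include <arrayfire.h>
#include <cstdio>

int main() {
    using namespace af;
    info();  // print the selected device (e.g. a GTX 480/580/680)
    for (int n = 512; n <= 4096; n *= 2) {
        array A = randu(n, n, f32);
        array B = randu(n, n, f32);
        matmul(A, B).eval();               // warm up kernels and caches
        sync();

        const int iters = 10;
        timer t = timer::start();
        for (int i = 0; i < iters; ++i) matmul(A, B).eval();
        sync();                            // wait for the GPU to finish
        double s = timer::stop(t) / iters;

        double gflops = 2.0 * n * n * n / (s * 1e9);  // 2*n^3 flops per matmul
        printf("%5d x %-5d : %7.1f GFLOPS\n", n, n, gflops);
    }
    return 0;
}
```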

ArrayFire for Defense and Intelligence Applications

ArrayFire | C/C++, Case Studies, CUDA, Events, Fortran

In case you missed it, we recently held a webinar on the ArrayFire GPU Computing Library and its applications to Defense and Intelligence functions. Defense projects often have hard deadlines and definite speed targets, and ArrayFire is a fast and easy-to-use choice for these applications. This webinar was part of an ongoing series of webinars that will help you learn more about the many applications of Jacket and ArrayFire, while interacting with AccelerEyes GPU computing experts.  John Melonakos, our CEO, introduced ArrayFire and talked about some exciting recent customer successes in the field of defense. He then ran through the mechanics of compiling and running code on a machine with 2 Quadro 6000 GPUs, and talked about customer success stories. …
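As a rough illustration of the multi-GPU portion of that demo, here is a minimal sketch of how a program can enumerate and select devices with ArrayFire’s C++ API (current 3.x syntax; the 2-D FFT workload is purely illustrative, not the code shown in the webinar).

```cpp
// Minimal sketch: list the available GPUs and run a small computation on each.
#include <arrayfire.h>
#include <cstdio>

int main() {
    int ndevices = af::getDeviceCount();
    printf("Found %d GPU(s)\n", ndevices);
    for (int d = 0; d < ndevices; ++d) {
        af::setDevice(d);                  // subsequent arrays live on device d
        af::info();                        // prints the device name, e.g. a Quadro 6000
        af::array A = af::randu(2048, 2048);
        af::array B = af::fft2(A);         // a 2-D FFT, common in signal processing
        af::sync(d);
        printf("device %d finished\n", d);
    }
    return 0;
}
```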

No Free Lunch for GPU Compiler Directives

John Melonakos | ArrayFire, C/C++, CUDA, Fortran

Last week, Steve Scott at NVIDIA put up a viral post entitled, “No Free Lunch for Intel MIC (or GPU’s).” It was a great read and a big hit in technical computing circles. The central point of Scott’s post was that there are no magic compilers. GPUs don’t have them, and neither will MIC. No compiler will be able to automatically recompile existing code and get great performance from MIC or GPUs. Rather, it takes a good amount of elbow grease to write high-performance code. We totally agree. The problem Scott addresses is real. Despite marketing spin to the contrary, developing code for GPUs requires work. However, we don’t agree with Scott’s conclusion that compiler directives are a good solution. You can’t fight …

Jacket v2.1 Now Available

Scott | Announcements, CUDA

Optimization Library, Sparse Functionality, Graphics Library Improvements, CUDA 4.1 Enhancements, and much more… AccelerEyes announces the release of Jacket v2.1, adding GPU computing capabilities for use with MATLAB®. Jacket v2.1 delivers even more speed through a host of new improvements, maximizing GPU device performance and utilization. Notable new features include an Optimization Library and additional functions in our Graphics Library. With Jacket v2.1, we have also extended support for sparse matrix subscripting and improved host-to-device and device-to-host transfer speeds for complex data. In addition, we have included various GFOR enhancements. Jacket v2.1 also incorporates NVIDIA CUDA 4.1 enhancements to provide improved functionality and performance (requires the latest drivers). Jacket is the premier GPU software plugin for MATLAB®, better than alternative …
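Jacket’s GFOR runs every iteration of a MATLAB loop simultaneously on the GPU. As a rough illustration of the same idea, here is a minimal sketch of the analogous gfor construct in ArrayFire’s C++ API (current 3.x syntax; the batch and matrix sizes are arbitrary, and this is not Jacket code).

```cpp
// Minimal sketch of a batched matrix-vector product using gfor: all loop
// iterations execute simultaneously on the GPU instead of one after another.
#include <arrayfire.h>

int main() {
    using namespace af;
    const int n = 256, batch = 32;
    array A = randu(n, n, batch);          // a stack of 32 matrices
    array x = randu(n, 1);
    array y = constant(0, n, 1, batch);
    gfor (seq i, batch) {
        y(span, span, i) = matmul(A(span, span, i), x);
    }
    eval(y);                               // force the batched computation
    return 0;
}
```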

12,288 CUDA Cores in One Computer

John Melonakos | Announcements, CUDA

Kepler is here.  And it’s fantastic! The news came out today that the first Kepler GPU, the GeForce GTX 680, has been launched.  A single GPU has 1,536 CUDA Cores.  This means that those high-end workstations with 8 PCIe slots will be able to pack 12,288 CUDA cores into a single computer.  That’s some serious computational power. Current high-end Fermi cards have 512 cores, so this new Kepler architecture boasts 3X the number of computation cores. Normally we focus on the higher-end Tesla products because those more aptly fit the needs of our science, engineering, and financial computing readers.  But we are excited nonetheless by this GeForce GPU.  It is a major step forward in GPU technology.  And this GeForce card portends …
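As a back-of-the-envelope check on those numbers, here is a minimal CUDA runtime sketch that enumerates the GPUs in a machine and estimates their total CUDA core count. The cores-per-SM table is a simplified assumption covering only Fermi and Kepler (the GTX 680’s 8 SMX units at 192 cores each give the 1,536 figure above).

```cpp
// Minimal sketch: estimate the total CUDA cores across all GPUs in a machine.
#include <cstdio>
#include <cuda_runtime.h>

// Simplified cores-per-SM lookup: Fermi (cc 2.x) and Kepler (cc 3.x) only.
static int coresPerSM(int major, int minor) {
    if (major == 2) return (minor == 1) ? 48 : 32;  // Fermi
    if (major == 3) return 192;                     // Kepler
    return 0;                                       // unknown architecture
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    int total = 0;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        int cores = prop.multiProcessorCount * coresPerSM(prop.major, prop.minor);
        printf("Device %d: %s, %d SMs, ~%d CUDA cores\n",
               d, prop.name, prop.multiProcessorCount, cores);
        total += cores;
    }
    printf("Total CUDA cores in this machine: ~%d\n", total);
    return 0;
}
```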

ArrayFire for Financial Computing Applications

ArrayFire | Case Studies, Events

In case you missed it, we recently held a webinar on how to accelerate financial computing applications using Jacket. The performance advantages brought to financial computing algorithms through Jacket and GPUs represent the best way to accelerate MATLAB® code. This webinar was part of an ongoing series of webinars that will help you learn more about the many applications of Jacket and ArrayFire, while interacting with AccelerEyes GPU computing experts. Scott Blakeslee, our Director of Business Development, introduced Jacket and talked about some exciting recent customer successes in the field of financial computing. Gallagher Pryor, CTO of AccelerEyes, then demoed some financial code speedups on one of our office machines. The major takeaway from the webinar video was that Jacket is …