New Look, Same Acceleration Gang

Aaron Taylor | Announcements, ArrayFire

We have officially rebranded from AccelerEyes to ArrayFire! Our rebranding includes a website redesign, improved documentation, and, as a bonus, an upcoming release of a new version of the ArrayFire software library. We have even more innovations waiting in the wings, and we are optimistic about a bright future under our new banner! Please don't hesitate to contact us if you have any questions about this transition; we're happy to help you find what you're looking for and to assist in whatever ways we can. If you want faster code, you've come to the right place! -The ArrayFire Gang

ArrayFire-OpenGL Interop using CUDA

Shehzan Mohammed | ArrayFire, CUDA, OpenGL

A lot of ArrayFire users have been interested in using ArrayFire together with OpenGL for graphics computation. In the long run, we plan to expand this interoperability further and make it easier through ArrayFire itself. For now, we have developed a small example that uses the CUDA-OpenGL interop API to handle the interop between ArrayFire and OpenGL. Some of the advantages of direct ArrayFire-OpenGL interop are: Faster data transfers: since the OpenGL buffers and the ArrayFire data both reside on the GPU, we can use a direct device-to-device copy rather than using the CPU as an intermediate and the relatively slow PCIe interface. Offscreen rendering: It is commonly …
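To give a flavor of the approach, here is a minimal sketch of copying ArrayFire data into an OpenGL vertex buffer through the CUDA-OpenGL interop API. The buffer handle vbo, the element count num, and the device-pointer accessor are assumptions for illustration and are not taken from the example mentioned above; a valid OpenGL context is assumed to exist.

    #include <arrayfire.h>
    #include <cuda_gl_interop.h>

    // Sketch: device-to-device copy from an ArrayFire array into an existing
    // OpenGL buffer 'vbo' holding 'num' floats. The device-pointer call may be
    // spelled differently depending on your ArrayFire version.
    void copy_to_vbo(GLuint vbo, af::array &data, size_t num)
    {
        // Register the OpenGL buffer with CUDA (in practice, once per buffer).
        cudaGraphicsResource *resource = 0;
        cudaGraphicsGLRegisterBuffer(&resource, vbo, cudaGraphicsMapFlagsWriteDiscard);

        // Map the buffer so CUDA sees it as a plain device pointer.
        cudaGraphicsMapResources(1, &resource, 0);
        float *gl_ptr = 0;
        size_t bytes = 0;
        cudaGraphicsResourceGetMappedPointer((void **)&gl_ptr, &bytes, resource);

        // Copy directly on the GPU, bypassing the CPU and PCIe round trip.
        float *af_ptr = data.device<float>();
        cudaMemcpy(gl_ptr, af_ptr, num * sizeof(float), cudaMemcpyDeviceToDevice);
        data.unlock();  // hand the memory back to ArrayFire (name may vary by version)

        // Release the buffer so OpenGL can render from it again.
        cudaGraphicsUnmapResources(1, &resource, 0);
        cudaGraphicsUnregisterResource(resource);
    }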

ArrayFire v2.0 Official Release

Scott | Announcements, ArrayFire, CUDA, OpenCL

We are thrilled to announce the official release of ArrayFire v2.0, our biggest and best product ever! ArrayFire v2.0 adds full commercial support for OpenCL devices, including all AMD APUs and AMD FirePro™ graphics, CUDA GPUs from NVIDIA, and other OpenCL devices from Imagination, Freescale, ARM, Intel, and Apple. ArrayFire is a CUDA and OpenCL library designed for maximum speed without the hassle of writing time-consuming CUDA and OpenCL device code. With ArrayFire's library functions, developers can maximize productivity and performance. Each of ArrayFire's functions has been hand-tuned by CUDA and OpenCL experts.

Announcing ArrayFire for OpenCL:
- Support for all of ArrayFire's function library (with a few exceptions)
- Same API as ArrayFire for CUDA, enabling seamless interoperability (a short sketch follows below)
- Just-In-Time (JIT) compilation of …
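To illustrate the "same API" point in the list above, here is a minimal sketch of ArrayFire code that builds unchanged against either the CUDA or the OpenCL version of the library; only the library you link against differs. The sizes are arbitrary, and the function spellings follow the current ArrayFire documentation rather than the v2.0 release notes.

    #include <arrayfire.h>
    #include <cstdio>

    // Sketch: backend-agnostic ArrayFire code. The same source runs on CUDA
    // or OpenCL devices depending on which ArrayFire library is linked.
    int main()
    {
        af::info();                                   // report the detected device
        af::array a = af::randu(1024, 1024);          // random matrix on the device
        af::array b = af::matmul(a, af::transpose(a)); // GPU matrix multiply
        printf("sum of all elements: %g\n", af::sum<float>(b));
        return 0;
    }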

ARM Showcases ArrayFire OpenCL Support for Mali GPU at Supercomputing ’13

Scott | ArrayFire, Events, OpenCL

ARM showcased ArrayFire support for the Mali GPU at the Supercomputing '13 conference recently held in Denver. This exciting development caught the attention of many attendees as they viewed the ArrayFire demos running in the ARM and AccelerEyes exhibits. Energy budgets are always constrained, and energy is an expensive component of any HPC system. ARM Mali GPUs provide the best performance and throughput for a given energy envelope. By partnering with ARM, AccelerEyes further reduces the cost of HPC by minimizing development time and costs. AccelerEyes offers the most productive software solutions for accelerating code using GPUs, coprocessors, and OpenCL devices. AccelerEyes delivers ArrayFire to accelerate C, C++, and Fortran codes on CUDA and OpenCL devices. ArrayFire customers come from a wide range …

Partners Magnify the SC13 Experience

John Melonakos | ArrayFire, Events

Yesterday, we posted photos from our exhibit. Today was the last day of SC13, and we want to tip our hat to the wonderful partners that magnified our SC13 experience.

Creative Consultants, Mellanox, and Allinea: Creative Consultants ran an ArrayFire demo across several nodes using Mellanox interconnect. The demo was a multi-node, multi-GPU lattice Boltzmann simulation. Allinea also showcased their debugging and profiling tools on the same ArrayFire-based code.

AMD: ArrayFire OpenCL demos were showcased in the AMD exhibit. It was great to see momentum from AMD at SC13 carried over from the previous week's APU13 conference.

Microway: In the photo below, you can see ArrayFire running on Microway's WhisperStation. Microway had prime real estate at the conference and surely every …

Photos from SC13

John Melonakos | ArrayFire, CUDA, Events, OpenCL

SC13 was awesome this week! Tomorrow is the last day of the exhibition. For those of you that did not make it to the show, here are some pictures from our exhibit:

- The AccelerEyes Booth
- ArrayFire OpenCL Demo on ARM Mali
- ArrayFire CUDA Demo on NVIDIA K40
- ArrayFire OpenCL Demo on Intel Xeon Phi Coprocessor
- ArrayFire OpenCL Demo on AMD FirePro GPU

It was a great show and wonderful to see so many ArrayFire users in person. If you could not attend and would like to learn more about our CUDA or OpenCL products or services, let us know!

ArrayFire v2.0 Release Candidate Now Available for Download

Aaron Taylor | Announcements, ArrayFire, CUDA, OpenCL

ArrayFire v2.0 is now available for download. The second iteration of our free, fast, and simple GPU library now supports both CUDA and OpenCL devices.

Major Updates
- ArrayFire now works on OpenCL-enabled devices
- New and improved documentation
- Optimized for new GPUs: NVIDIA Kepler (K20) and AMD Tahiti (7970)

New in ArrayFire OpenCL
- Same APIs as the ArrayFire CUDA version
- Supports both Linux and Windows
- Just-In-Time (JIT) compilation of kernels
- Parallel for: gfor (see the sketch below)
- Accelerated algorithms for image processing, signal processing, data analysis and statistics, visualization, and more

New in ArrayFire CUDA
- New signal and image processing functions
- Faster transpose and matrix multiplication
- Better debugging support for GDB and Visual Studio
- Bug fixes to improve the overall experience

For a more complete list of the …
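As a quick illustration of the gfor construct mentioned in the list above, here is a minimal sketch that batches many independent matrix-vector multiplies into a single device pass. The sizes and variable names are illustrative and not taken from the release notes; function spellings follow the current ArrayFire documentation.

    #include <arrayfire.h>

    // Sketch: gfor runs every value of the loop index in parallel on the device,
    // here multiplying one matrix A against a batch of column vectors at once.
    int main()
    {
        const int n = 256, batch = 100;
        af::array A = af::randu(n, n);           // one n x n matrix
        af::array X = af::randu(n, batch);       // 'batch' column vectors
        af::array Y = af::constant(0, n, batch); // results, one column per vector

        gfor (af::seq i, batch) {
            // Each "iteration" handles column i; ArrayFire executes them together.
            Y(af::span, i) = af::matmul(A, X(af::span, i));
        }

        af_print(Y(af::seq(5), af::seq(5)));     // peek at a corner of the result
        return 0;
    }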

Application Time vs Solver Time

John Melonakos | ArrayFire, Computing Trends

Last week, HPCwire ran an interesting article entitled "Where has HPC's math gone?" The article analyzes the increasing importance of math solvers to successful HPC outcomes. As the number of cores grows, the percentage of time HPC codes spend in solvers increases significantly, and the chart in the article illustrates this trend nicely. ArrayFire is ideally suited for HPC applications that need to accelerate the toughest math problems. ArrayFire contains hundreds of math functions across numerous domains. In general, if the HPC community really wants to solve this problem, it will begin to invest more in libraries than in compilers, which have no chance of optimizing these tough math problems automatically. Rather, it is only through expertly-tuned codes, such as those developed …

ArrayFire Examples (Part 7 of 8) – PDE

ArrayFire | ArrayFire, CUDA

This is the seventh in a series of posts looking at our current ArrayFire examples. The code can be compiled and run from arrayfire/examples/ when you download and install the ArrayFire library. Today we will discuss the examples found in the pde/ directory. For these examples, my machine has the following configuration:

ArrayFire v1.9.1 (build XXXXXXX) by AccelerEyes (64-bit Linux)
License: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CUDA toolkit 5.0, driver 319.17
GPU0 Tesla K20c, 5120 MB, Compute 3.5 (current)
GPU1 Tesla C2075, 6144 MB, Compute 2.0
GPU2 Tesla C1060, 4096 MB, Compute 1.3
Display Device: GPU0 Tesla K20c
Memory Usage: 5044 MB free (5120 MB total)

The following examples formulate partial differential equations, which are generally used to build computer models involving several variables. In these examples, …
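The excerpt above cuts off before reaching the examples themselves, but to give a flavor of the style, here is a minimal sketch of one explicit time step of a 2D diffusion (heat) equation written with ArrayFire. The grid size, coefficient, and the use of af::shift for the finite-difference stencil are assumptions for illustration, not code from the pde/ directory.

    #include <arrayfire.h>

    // Sketch: explicit finite-difference update u += k * laplacian(u) on a 2D
    // grid, using af::shift to gather the four neighbors of every cell at once.
    int main()
    {
        const int n = 512;
        const float k = 0.2f;                  // diffusion coefficient * dt / dx^2
        af::array u = af::randu(n, n);         // initial field

        for (int t = 0; t < 100; ++t) {
            af::array lap = af::shift(u,  1, 0) + af::shift(u, -1, 0)
                          + af::shift(u,  0, 1) + af::shift(u,  0, -1)
                          - 4 * u;             // 5-point Laplacian, periodic edges
            u = u + k * lap;
            af::eval(u);                       // force evaluation each step
        }

        af_print(u(af::seq(3), af::seq(3)));   // sample a few values
        return 0;
    }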

ArrayFire Examples (Part 6 of 8) – Multiple GPUs

ArrayFire | ArrayFire, CUDA

This is the sixth in a series of posts looking at our current ArrayFire examples. The code can be compiled and run from arrayfire/examples/ when you download and install the ArrayFire library. Today we will discuss the examples found in the multi_gpu/ directory. For these examples, my machine has the following configuration:

ArrayFire v1.9.1 (build XXXXXXX) by AccelerEyes (64-bit Linux)
License: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CUDA toolkit 5.0, driver 319.17
GPU0 Tesla K20c, 5120 MB, Compute 3.5 (current)
GPU1 Tesla C2075, 6144 MB, Compute 2.0
GPU2 Tesla C1060, 4096 MB, Compute 1.3
Memory Usage: 4935 MB free (5120 MB total)

From fastest to slowest, the GPUs in my machine are the K20c, the C2075, and the C1060. ArrayFire is capable of multi-GPU management. This capability becomes useful for benchmarking …
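The excerpt above cuts off, but to give a flavor of the multi-GPU pattern the post describes, here is a minimal sketch that visits each available GPU and runs the same small computation on it. The workload is arbitrary, and the device-selection calls are spelled per the current ArrayFire documentation (older releases used deviceset/devicecount), so treat the names as assumptions.

    #include <arrayfire.h>
    #include <cstdio>

    // Sketch: iterate over the GPUs in the machine, running one computation on
    // each device in turn.
    int main()
    {
        int ngpus = af::getDeviceCount();
        for (int i = 0; i < ngpus; ++i) {
            af::setDevice(i);                  // subsequent arrays live on GPU i
            af::array a = af::randu(2048, 2048);
            af::array b = af::matmul(a, a);
            af::sync();                        // wait for GPU i to finish
            printf("device %d done\n", i);
        }
        return 0;
    }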