Benchmarking parallel vector libraries

Pavan ArrayFire, Benchmarks, C/C++, CUDA Leave a Comment

There are many open source libraries that implement parallel versions of the algorithms in the C++ standard template library. Inevitably, we get asked how ArrayFire compares to the other libraries out there. In this post we compare the performance of ArrayFire to that of Boost.Compute, HSA-Bolt, Intel TBB, and Thrust. The benchmarks cover the following commonly used vector algorithms across 3 different architectures:

* Reductions
* Scan
* Transform

The following setup was used for benchmarking; the code to reproduce the benchmarks is linked at the bottom of the post. The hardware used for the benchmarks is listed below:

* NVIDIA Tesla K20
* AMD FirePro S10000
* Intel Xeon E5-2560v2

Background: ArrayFire provides high ...
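As a rough illustration of the three operations being benchmarked, here is a minimal ArrayFire sketch; the array size and names are placeholders, not the actual benchmark code from the post:

```cpp
#include <arrayfire.h>
#include <cstdio>

int main() {
    // Generate a random single-precision vector on the device.
    af::array a = af::randu(10 * 1000 * 1000);

    // Reduction: collapse all elements down to a single value.
    float total = af::sum<float>(a);

    // Scan: inclusive prefix sum along the vector.
    af::array prefix = af::accum(a);

    // Transform: apply an element-wise operation to every element.
    af::array b = af::sin(a) * 2.0f;

    prefix.eval();  // force evaluation of the lazy expressions
    b.eval();

    printf("sum = %g\n", total);
    af_print(b(af::seq(5)));  // show the first few transformed values
    return 0;
}
```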

Feature detection and tracking using ArrayFire

Brian Kloppenborg ArrayFire, C/C++, Image Processing Leave a Comment

A few weeks ago we added some computer vision functionality to our open source ArrayFire GPU computing library. Specifically, we implemented the FAST feature extractor, BRIEF feature point descriptor, ORB multi-resolution scale invariant feature extractor, and a Hamming distance function. When combined, these functions enable you to find features in videos (or images) and track them between successive frames.
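As a minimal sketch of detecting and describing features with ArrayFire's ORB implementation (exact signatures and default parameters may differ between ArrayFire versions, and the image path is a placeholder):

```cpp
#include <arrayfire.h>
#include <cstdio>

int main() {
    // Load a frame as a single-channel grayscale image (path is a placeholder).
    af::array frame = af::loadImage("frame_000.png", false);

    // ORB: detect FAST corners and compute BRIEF-style binary descriptors.
    af::features feat;
    af::array desc;
    af::orb(feat, desc, frame);

    printf("Detected %u features\n", (unsigned)feat.getNumFeatures());
    return 0;
}
```

Running the same step on the next frame and matching the two descriptor sets (for example with ArrayFire's Hamming matcher) is what links features between successive frames.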

Using zero-copy buffers on integrated GPUs

Brian Kloppenborg C/C++, OpenCL 1 Comment

One of the most powerful aspects of parallel programming on integrated GPUs is taking advantage of shared memory and caches. The best example of this is sharing common data between the CPU and GPU via zero-copy buffers. This technique permits your program to avoid the O(N) cost of copying data to/from the GPU, which is particularly useful for applications that deal with real-time data streams, like video processing.
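Here is a minimal OpenCL sketch of the idea: allocate a buffer the driver can place in shared memory (CL_MEM_ALLOC_HOST_PTR) and access it from the host by mapping instead of copying. It assumes a context and queue already exist; on a discrete GPU the same calls work but may not be zero-copy.

```cpp
#include <CL/cl.h>
#include <cstring>

// Assumes `context` and `queue` were created earlier for an integrated GPU.
void fill_zero_copy_buffer(cl_context context, cl_command_queue queue,
                           const float* src, size_t n) {
    cl_int err;

    // Ask the driver to allocate host-accessible memory; on integrated GPUs
    // this typically lives in the shared physical memory, enabling zero-copy.
    cl_mem buf = clCreateBuffer(context, CL_MEM_ALLOC_HOST_PTR | CL_MEM_READ_ONLY,
                                n * sizeof(float), nullptr, &err);

    // Map the buffer into the host address space rather than copying into it.
    float* ptr = static_cast<float*>(
        clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE, 0,
                           n * sizeof(float), 0, nullptr, nullptr, &err));

    std::memcpy(ptr, src, n * sizeof(float));  // write directly into shared memory

    // Unmap before the GPU uses the buffer; no clEnqueueWriteBuffer is needed.
    clEnqueueUnmapMemObject(queue, buf, ptr, 0, nullptr, nullptr);
    clReleaseMemObject(buf);
}
```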

Building a C++ Interpreter (Cling) for x86 and ARM

Gallagher Pryor C/C++ Leave a Comment

(Pictured: a C++ interpreter compiled for ARM, running on an x86 EC2 instance after following the given instructions.) As C++ becomes more mature and high-level, an interpreted workflow might bring mainstream, scripting-like productivity to C++ development. Developing through a C++ interpreter (Cling), as opposed to a standard compiler, is an amazing leap in productivity and a window into the newest features of C++. This post tells you how to get your own bleeding-edge C++ interpreter built right on top of the development version of LLVM, with a repeatable procedure via Amazon EC2. With our prescribed steps in place, you can always have an up-to-date development version of Cling. This allows quick testing and investigation of LLVM's newest ...

Demystifying PTX Code

Peter C/C++, CUDA, OpenCL 3 Comments

In my recent post, I showed how to generate PTX files from both CUDA and OpenCL kernels. In this post I will address what a PTX file looks like and, more importantly, how to understand all those complicated instructions in a PTX file. I will use the same vector addition kernel from the previous post (the complete code can be found here). For this post, I will focus on the OpenCL PTX file; in a future post I will discuss the differences between the PTX generated from OpenCL and CUDA code. Let's start by looking at the complete PTX code:

The file starts with a header showing some compiler information in comments, followed ...

Cross Compile to Windows From Linux

Gallagher Pryor ArrayFire, C/C++ 7 Comments

Why did I not know about this? It's like I just discovered the screwdriver! On Debian and variants (from tinc's Windows cross-compilation page),

Granted, this isn't a silver bullet, but rather a quick way to get a Windows build of platform-independent code that you might already have running on Linux. I've found that this approach makes it easy to get binaries out the door in a hurry when it's hard to get a project building with Visual Studio or even on the Windows platform itself (due to, say, a complex build system). I cover some quick solutions to the most common caveats I've run into below. Caveats: MinGW GCC vs. GCC: Not as Smart with Templates. Whatever ...
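The exact commands from the tinc page are not reproduced in this excerpt; as a rough sketch (assuming Debian's standard mingw-w64 packages rather than whatever that page prescribes), cross-compiling a trivial C++ program for Windows looks roughly like this:

```cpp
// hello.cpp -- a trivial, platform-independent program to sanity-check the toolchain.
//
// Assumed Debian setup and invocation (an illustration, not the commands from
// the tinc page):
//
//   sudo apt-get install g++-mingw-w64-x86-64
//   x86_64-w64-mingw32-g++ -static -o hello.exe hello.cpp
//
// The resulting hello.exe runs on Windows (or under Wine for a quick test).
#include <iostream>

int main() {
    std::cout << "Built on Linux, running on Windows" << std::endl;
    return 0;
}
```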

Image editing using ArrayFire: Part 3

Pradeep ArrayFire, C/C++, CUDA, Image Processing, OpenCL 1 Comment

Today, we will be doing the third post in our series Image editing using ArrayFire. References to the earlier posts are available below.

* Part 1
* Part 2

In this post, we will be looking at the following operations:

* Image Histogram
* Simple Binary Threshold
* Otsu Threshold
* Iterative Threshold
* Adaptive Binary Threshold
* Emboss Filter

Today's post will be mostly dominated by the different types of threshold operations we can achieve using ArrayFire. Image Histogram: we have a built-in function in ArrayFire that creates a histogram. The input image is converted to grayscale before the histogram calculation, as our histogram implementation works for vectors and 2D matrices only. In case you need a histogram for all three channels of a color image, you can ...
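As a minimal sketch of the histogram step (assuming a 256-bin histogram over the 8-bit intensity range; the file name is a placeholder, not from the original post):

```cpp
#include <arrayfire.h>

int main() {
    // Load the image as single-channel grayscale (the path is a placeholder).
    af::array gray = af::loadImage("input.png", false);

    // 256 bins spanning the 8-bit intensity range [0, 255].
    af::array hist = af::histogram(gray, 256, 0.0, 255.0);

    af_print(hist(af::seq(10)));  // show the first few bin counts
    return 0;
}
```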

Quest for the Smallest OpenCL Program

Umar C/C++, OpenCL 8 Comments

I have heard many complaints about the verbosity of the OpenCL API. This claim is not unwarranted. The verbosity is due to the low-level nature of OpenCL: it is written in the C programming language, the lingua franca of programming languages. While this allows you to run an OpenCL program on virtually any platform, it has some disadvantages. A typical OpenCL program must (a condensed sketch of these steps follows the list):

* Query for the platform
* Get the device IDs from the platform
* Create a context from a set of device IDs
* Create a command queue from the context
* Create buffer objects for your data
* Transfer the data to the buffer
* Create and build a program from source
* Extract the kernels
* Launch the kernels
* Transfer the data to ...
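Here is a condensed sketch of that boilerplate using the C API, not the author's code: error checking and resource releases are omitted, a single platform and device are assumed, and the kernel source and names are placeholders.

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Placeholder kernel source, not from the original post.
static const char* src = R"(
__kernel void scale(__global float* data, float factor) {
    size_t i = get_global_id(0);
    data[i] *= factor;
})";

int main() {
    const size_t n = 1024;
    std::vector<float> host(n, 1.0f);

    cl_platform_id platform;                                       // 1. query for the platform
    clGetPlatformIDs(1, &platform, nullptr);

    cl_device_id device;                                           // 2. get the device IDs
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device,          // 3. create a context
                                     nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);  // 4. command queue

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,            // 5. buffer object
                                n * sizeof(float), nullptr, nullptr);
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, n * sizeof(float),    // 6. transfer data in
                         host.data(), 0, nullptr, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);  // 7. build program
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);

    cl_kernel k = clCreateKernel(prog, "scale", nullptr);          // 8. extract the kernel
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &factor);

    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr,          // 9. launch the kernel
                           0, nullptr, nullptr);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float),     // 10. transfer data out
                        host.data(), 0, nullptr, nullptr);

    printf("host[0] = %f\n", host[0]);
    return 0;
}
```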

ArrayFire Capability Update - July 2014

Oded Android, ArrayFire, C/C++, CUDA, Fortran, JAVA, OpenCL, R 1 Comment

In response to user requests for additional ArrayFire capabilities, we have decided to extend the library to fall back to the CPU when OpenCL drivers for CPUs are not available. This means that ArrayFire code will be portable both to devices that have OpenCL set up and to devices without it. This is done through the creation of additional backends, which will allow ArrayFire users to write their code once and have it run on multiple systems. We currently support the following systems and architectures:

* NVIDIA GPUs (Tesla, Fermi, and Kepler)
* AMD GPUs, CPUs, and APUs
* Intel CPUs, GPUs, and the Xeon Phi co-processor
* Mobile and embedded devices

As part of this update process we are also looking at extending ArrayFire capabilities to low-power systems such ...

How to write vectorized code

Pavan ArrayFire, C/C++ Leave a Comment

Programmers and data scientists want to take advantage of fast, parallel computational devices. Writing vectorized code is becoming a necessity to get the best performance out of the current generation of parallel hardware and scientific computing software. However, writing vectorized code may not be immediately intuitive. There are many ways you can vectorize a given code segment, and each method has its own benefits and drawbacks. Hence, writing vectorized code involves analyzing the pros and cons of the available methods and choosing the right one for your problem. In this post, I present various ways to vectorize your code using ArrayFire. ArrayFire is chosen because of my familiarity with the software; the same methods can be easily used in NumPy, Octave, ...
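As a flavor of what vectorization means in practice, here is a minimal sketch (an illustration, not taken from the post) of replacing an element-wise loop with a single ArrayFire array expression:

```cpp
#include <arrayfire.h>
#include <vector>

int main() {
    const int n = 1000000;

    // Scalar-style loop: one element at a time on the CPU.
    std::vector<float> x(n, 0.5f), y(n);
    for (int i = 0; i < n; ++i)
        y[i] = 3.0f * x[i] * x[i] + 2.0f;

    // Vectorized equivalent: one expression over the whole array,
    // executed in parallel on the active ArrayFire device.
    af::array xa(n, x.data());            // copy host data to the device
    af::array ya = 3.0f * xa * xa + 2.0f;

    af_print(ya(af::seq(3)));             // show the first few results
    return 0;
}
```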