This is the third in a series of posts looking at our current ArrayFire examples. The code can be compiled and run from arrayfire/examples/ when you download and install the ArrayFire library. Today we will discuss the examples found in the financial/ directory. In these examples, my machine has the following configuration:

ArrayFire v1.9 (build XXXXXXX) by AccelerEyes (64-bit Linux)
License: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CUDA toolkit 5.0, driver 304.54
GPU0 Quadro 6000, 6144 MB, Compute 2.0 (single,double)
Display Device: GPU0 Quadro 6000
Memory Usage: 5549 MB free (6144 MB total)…

Black-Scholes

There are a number of applications of ArrayFire and GPU programming in the world of finance and markets. Here we have an example of Black-Scholes, a model for computing option prices in the stock market. Understanding …
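To make the model concrete, here is a minimal sketch of a closed-form Black-Scholes call pricer written over ArrayFire arrays. This is not the code from the financial/ example; the names used (af::log, af::exp, af::sqrt, af::erfc) follow the current ArrayFire C++ API and may differ from what shipped with v1.9.

```cpp
// Sketch only: Black-Scholes call pricing with ArrayFire-style array
// operations. Not the shipped financial/ example; API names assume the
// current ArrayFire C++ interface and may differ from v1.9.
#include <arrayfire.h>
#include <cmath>

// Cumulative normal distribution via the complementary error function.
static af::array cnd(const af::array &x) {
    return 0.5 * af::erfc(-x / std::sqrt(2.0));
}

// S: spot price, K: strike, r: risk-free rate, T: years to expiry, sigma: volatility.
af::array blackScholesCall(const af::array &S, const af::array &K,
                           const af::array &r, const af::array &T,
                           const af::array &sigma) {
    af::array d1 = (af::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                   / (sigma * af::sqrt(T));
    af::array d2 = d1 - sigma * af::sqrt(T);
    return S * cnd(d1) - K * af::exp(-r * T) * cnd(d2);
}
```

Because every input is an array, a single call prices an entire batch of options at once, which is exactly the kind of vectorized workload that maps well onto the GPU.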
7 Highlights of GTC 2013 – Day 4 of 4
Day 4 at GTC is always a little less hyped than the first 3 days, but it is when some of the best sessions are found. Here are 7 of the highlights we’ve collected from our team on the last day of GTC 2013: Paulius Micikevicius of NVIDIA gave a great talk entitled, “Performance Optimization: Programming Guidelines and GPU Architecture Details Behind Them.” It was so great, we have 2 highlights from this talk. The first Paulius highlight is his explanation of how instruction-level parallelism is essential to taking full advantage of Kepler GPUs. Paulius gave a clear presentation on these difficult concepts. The second Paulius highlight is his thorough treatment of the Kepler memory hierarchy. It is very detailed and …
7 Highlights of GTC 2013 – Day 3 of 4
Day 3 at GTC was awesome. It was super hard to narrow down our list to just 7 highlights. For instance, the stress ball pyramid in our booth does not count. Neither does the massive ArrayFire poster in front of the keynote hall. Here are 7 of the highlights we’ve collected from our team on the third day of GTC 2013: Professor Erez Lieberman Aiden of Baylor and Rice Universities gave a great keynote on “Parallel Processing of the Genomes, by the Genomes and for the Genomes.” He discussed how folding of genes and interactions between multiple folded genes can impact genetic expression. It’s not just about the composition of the gene, but also how the gene folds. It turns …
7 Highlights of GTC 2013 – Day 1 of 4
AccelerEyes is out in force at GTC. We ended up with 10 of our engineers and sales staff here onsite. I collected feedback from the team to learn what people enjoyed the most from today’s activities. Here are 7 of the highlights we’ve collected from our team on the first day of GTC 2013: Will Ramey of NVIDIA kicked off GTC with a tutorial on the CUDA ecosystem. He talked about the three different approaches to getting GPU acceleration: 1) Libraries, 2) Compiler Directives, and 3) Programming Languages. He talked about how libraries, if you can find one for your application (hint, hint), are the best of the 3 options, because you get great performance and you don’t have to …
Giddy for GTC – We’re Taking it to the Next Level
GTC is quickly approaching and AccelerEyes is giddy with excitement! This year we are taking things to the next level as a Silver Sponsor at GTC 2013. That means you’ll be seeing a lot more of us throughout the conference!

Schedule a Meeting with Us

Do you want to meet with us personally? Schedule a time to sit down with AccelerEyes engineers and account representatives using our online scheduler.

Visit our Booth

If you’re attending GTC, be sure to come visit us at booth #204 to see some great demos or to chat with anyone in our Software Shop for CUDA & OpenCL. Come see how ArrayFire complements other GPU development efforts, including raw CUDA/OpenCL development, OpenACC, and other GPU libraries. Register …
ArrayFire Examples (Part 2 of 8) – Benchmarks
This is the second in a series of posts looking at our current ArrayFire examples. The code can be compiled and run from arrayfire/examples/ when you download and install the ArrayFire library. Today we will discuss the examples found in the benchmarks/ directory. In these examples, my machine has the following configuration:

ArrayFire v1.9 (build XXXXXXX) by AccelerEyes (64-bit Linux)
License: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CUDA toolkit 5.0, driver 304.54
GPU0 Quadro 6000, 6144 MB, Compute 2.0 (single,double)
Display Device: GPU0 Quadro 6000
Memory Usage: 5549 MB free (6144 MB total)…

Blas

This example shows a simple benchmarking process using ArrayFire’s matrix multiply routine. For more information on BLAS, click here. The metric measured in this example is GFLOPS (giga floating-point operations per second). I got the following results using …
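The excerpt cuts off before the numbers, but the shape of such a benchmark is simple: multiply two large matrices, time the operation, and divide the floating-point operation count by the elapsed time. Below is a minimal sketch under that assumption; it uses the current ArrayFire C++ API (af::randu, af::matmul, af::sync, af::timer) rather than the exact v1.9 benchmarks/ code, which may be organized differently.

```cpp
// Sketch only: timing a dense matrix multiply and reporting GFLOPS.
// API names assume the current ArrayFire C++ interface, not necessarily v1.9.
#include <arrayfire.h>
#include <cstdio>

int main() {
    const int n = 2048;
    af::array A = af::randu(n, n);        // random n x n single-precision matrices
    af::array B = af::randu(n, n);

    af::matmul(A, B);                     // warm-up run (allocation, kernel compilation)
    af::sync();

    af::timer t = af::timer::start();
    af::array C = af::matmul(A, B);
    af::sync();                           // wait for the device to finish before stopping the clock
    double seconds = af::timer::stop(t);

    // A dense n x n matmul performs roughly 2*n^3 floating-point operations.
    double gflops = 2.0 * n * n * n / (seconds * 1e9);
    std::printf("%d x %d matmul: %.1f GFLOPS\n", n, n, gflops);
    return 0;
}
```

The af::sync() calls matter: ArrayFire launches device work asynchronously, so without them the timer would stop before the GPU had actually finished.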
GTC 2013 Tutorial – CUDA Accelerated Image Processing Libraries
The 2013 GPU Technology Conference is just two weeks away. We’re super excited. We’re spending a lot of time preparing for our tutorial on CUDA Accelerated Image Processing Libraries. We think it will be well worth your while to attend. This is an 80-minute session all about CUDA image processing from James Malcolm, an AccelerEyes co-founder and lead engineer. You will walk away from the tutorial much better prepared to build fast computer vision and image processing codes. The session abstract is as follows: Image processing has consistently proven to benefit greatly from GPU acceleration. A number of libraries available from NVIDIA and AccelerEyes make image processing development efficient and lead to big speedups. Using these libraries can often significantly shorten …
ArrayFire Examples (Part 1 of 8) – Getting Started
This is the first in a series of posts looking at our current ArrayFire examples. The code can be compiled and run from arrayfire/examples/ when you download and install the ArrayFire library. Today we will discuss the examples found in the getting_started/ directory.

Hello World

Of course we start with the classic “Hello World” example, which walks you through the basics of using the ArrayFire library. Running this example will print out system and device information, as well as perform some basic matrix operations. This is a good place to get familiar with the basic data container for ArrayFire – the array.

ArrayFire v1.9 (build XXXXXXX) by AccelerEyes (64-bit Linux)
License: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
CUDA toolkit 5.0, driver 304.54
GPU0 Quadro 6000, 6144 …
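For flavor, here is a hedged sketch of what such a “hello world” can look like with ArrayFire. It is written against the current C++ API (af::info, af::randu, af_print, af::matmul) and is not the exact getting_started code shipped with v1.9.

```cpp
// Sketch only: print device info, then do some basic matrix operations.
// API names assume the current ArrayFire C++ interface, not necessarily v1.9.
#include <arrayfire.h>

int main() {
    af::info();                         // print driver, toolkit, and device information

    af::array A = af::randu(3, 3);      // 3x3 matrix of uniform random values on the device
    af_print(A);                        // inspect the array contents

    af_print(A + A);                    // element-wise arithmetic
    af_print(af::matmul(A, A.T()));     // matrix multiply against the transpose
    return 0;
}
```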
Benchmarking Tesla K20
In this blog post, we are going to compare NVIDIA’s latest high-end offering, the Tesla K series (PDF), with their previous offering. In particular, we are comparing the Tesla K20c with the Tesla C2070/2075. This blog post follows a similar post we did last year about benchmarking the GTX 680. We take a look at a similar set of functions (and a little bit more) to see what benefits the newer line brings. All of the benchmarks were done using double precision. In all of the graphs, higher trendlines are better.

Matrix Multiplication

In house at AccelerEyes, we use matrix multiplication as the gold standard for testing the maximum performance of all the new GPUs we end up with. The K20c reaches a peak at …
7 Tips for CUDA & OpenCL Programming and How ArrayFire Helps
In order to get the best performance from your CUDA or OpenCL code, it helps to keep a few optimization tips in mind. Note: By “accelerator” we refer to GPUs, APUs, co-processors, FPGAs, and any devices capable of running CUDA or OpenCL.

Vectorized Code: Accelerators perform best with vectorized code because the computations map naturally onto the arithmetic cores of the hardware. ArrayFire functions are inherently vectorized, so if you are using ArrayFire, you are writing vectorized code.

Memory Transfers: Avoid excessive memory transfers. Each casting operation to and from the accelerator moves data back and forth between CPU memory and accelerator memory. ArrayFire makes many automatic optimizations to minimize these memory transfers by only transferring data when …
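As a hedged illustration of these first two tips (not code from the post itself), the sketch below contrasts an element-wise host loop with a whole-array ArrayFire computation, and keeps intermediate results on the device so that only a single scalar crosses back to the host. The names used (af::constant, af::sin, af::sum) come from the current ArrayFire C++ API and may differ from the release the post discusses.

```cpp
// Sketch only: vectorized ArrayFire code vs. a scalar CPU loop, and keeping
// intermediate data on the accelerator to minimize memory transfers.
#include <arrayfire.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;

    // Scalar CPU version: one element at a time on the host.
    std::vector<float> host(n, 0.5f);
    float cpuSum = 0.0f;
    for (int i = 0; i < n; ++i)
        cpuSum += std::sin(host[i]) * std::sin(host[i]);

    // Vectorized ArrayFire version: whole-array operations map onto the
    // accelerator's arithmetic cores. The intermediates (x, s, s*s) stay in
    // accelerator memory; only the final reduced scalar is copied back.
    af::array x = af::constant(0.5, n);
    af::array s = af::sin(x);
    float accSum = af::sum<float>(s * s);

    std::printf("cpu: %f  accelerator: %f\n", cpuSum, accSum);  // should agree
    return 0;
}
```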