Machine Learning with ArrayFire

ArrayFire | Benchmarks, C/C++, Case Studies, CUDA, Events

In case you missed it, on June 15 we held a webinar on the ArrayFire GPU Computing Library and its applications to machine learning. The webinar was part of a free series that helps you learn about ArrayFire and Jacket (our MATLAB® product); the webinars are open to everyone and give you a chance to interact directly with AccelerEyes engineers. Learn more at http://www.accelereyes.com/webinars. Chris, a Software Engineer at AccelerEyes, explained ArrayFire’s position in the GPU computing world and presented benchmarks in which ArrayFire beats GPU libraries such as Thrust in many critical applications. He also noted that ArrayFire can be used either standalone or in combination with other options for GPU computing such …

Hiring Tons

John Melonakos | Announcements

Join the hottest GPU software company. We’re rapidly expanding and looking for talented developers who are passionate about making the programming world more efficient. The things we work on at AccelerEyes give other developers orders of magnitude more productivity, greatly increasing the amount of science, engineering, and analytics produced each year, across the globe and across every technical computing industry. Specifically, we are looking to hire many developers for the following two roles: Application Engineering – the most vital job. It requires the ability to build applications in a variety of disciplines (such as healthcare, finance, oil & gas, and defense). You will be among the most expert users of ArrayFire and Jacket, and will spread your understanding …

Top 10 List at GTC 2012

John Melonakos | Announcements, Events

It’s going to be hard to sleep tonight. So much GPU goodness awaits over the next three days of the GPU Technology Conference. Here are my top 10 things to do at GTC 2012: Sessions to Attend #1: S0287 – Jacket for Multidimensional Scaling in Genomics – This is a great opportunity to learn about accelerating MATLAB® on the GPU. Come learn why thousands of scientists, engineers, and analysts are using Jacket to do more with less coding hassle. (Day: Tuesday, 05/15; Time: 5:30 pm – 5:55 pm; Location: Room K) #2: S0415 – An Accelerated Weeks Method for Numerical Laplace Transform Inversion – Learn how researchers have been able to use Jacket in MATLAB® to implement the Weeks method more efficiently and robustly. (Day: Wednesday, 05/16; Time: 9:30 …

AccelerEyes Webinar Series

Scott | Announcements, CUDA, Events, OpenCL

AccelerEyes invites you to participate in a series of webinars designed to help you learn more about Jacket for MATLAB® and ArrayFire for C/C++/Fortran/Python, a comprehensive library of GPU-accelerated functions. GPU Programming for Medical Image Segmentation: January 18, 2012 at 3:00 p.m. EST. A huge volume of data is generated by acquisition modalities such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, and nuclear medicine. A common need is to manipulate and transmit this data using compression techniques in as little time as possible. During this webinar we will show Jacket’s speed in handling volumes, from subscripting to convolutions. Come learn how to accelerate common medical imaging applications with Jacket for MATLAB®, an easy, powerful programming library. OpenCL and CUDA Trade-Offs and Comparison: February 15, 2012 at …

Jacket v2.0 Now Available

Scott | Announcements, OpenCL

New multi-GPU functionality, added support for OpenCL devices, and much more… AccelerEyes announces the release of Jacket version 2.0, adding GPU computing capabilities for use with MATLAB®. Version 2.0 delivers even more speed through a host of new improvements that maximize GPU device performance and utilization. Notable new features include a multi-GPU interface and support for OpenCL devices. With Jacket v2.0, your M-code is now portable across all major GPU devices, including AMD/ATI, Intel, and NVIDIA chips. Jacket is the premier GPU software plugin for MATLAB®. It is relied upon by thousands of organizations for rapid prototyping and problem solving across a range of government, manufacturing, energy, media, biomedical, financial, and scientific research applications. Multi-GPU Details: …
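As background on what a multi-GPU interface sits on top of, here is a minimal C++ sketch at the level of the CUDA runtime host API. This is not Jacket’s MATLAB® interface; the buffer size and the placeholder memset are assumptions standing in for real per-device work.

```cpp
#include <cstdio>
#include <cuda_runtime.h>   // CUDA runtime host API; link with -lcudart

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
    std::printf("Found %d CUDA device(s)\n", device_count);

    const size_t bytes = (1 << 20) * sizeof(float);
    for (int dev = 0; dev < device_count; ++dev) {
        cudaSetDevice(dev);                 // subsequent calls target this GPU
        float* d_buf = nullptr;
        cudaMalloc(reinterpret_cast<void**>(&d_buf), bytes);
        cudaMemset(d_buf, 0, bytes);        // placeholder for real per-device work
        cudaDeviceSynchronize();            // wait for this device to finish
        cudaFree(d_buf);
    }
    return 0;
}
```

A high-level multi-GPU interface hides this device-selection bookkeeping behind a few commands; that convenience is what the release is advertising.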

AccelerEyes Releases ArrayFire GPU Software

Scott | Announcements, ArrayFire, C/C++, CUDA, Fortran, OpenCL

A free, fast, and simple GPU library for CUDA and OpenCL devices. AccelerEyes announces the launch of ArrayFire, a freely available GPU software library supporting CUDA and OpenCL devices. ArrayFire supports the C, C++, Fortran, and Python languages on AMD, Intel, and NVIDIA hardware. Learn more by visiting the ArrayFire product page. “ArrayFire is our best software yet and anyone considering GPU computing can benefit,” says James Malcolm, VP Engineering at AccelerEyes. “It is fast, simple, GPU-vendor neutral, full of functions, and free for most users.” Thousands of paying customers currently enjoy AccelerEyes’ GPU software products. With ArrayFire, everyone developing software for GPUs has an opportunity to enjoy these benefits without the upfront expense of a developer license. Reasons to use ArrayFire: …

LAPACK Functions in Jacket (eig, inv, etc.)

John Melonakos | CUDA

One of the questions people commonly ask us is: when will Jacket support LAPACK features such as eigenvalue decomposition, matrix inverse, and system solvers? The question is so popular because people recognize that these kinds of problems are well suited to the GPU and will deliver great performance boosts for Jacket users. We are looking forward to delivering these functions in Jacket. Jacket is currently built on top of CUDA. For reasons why we like CUDA, see our previous blog post about OpenCL. While NVIDIA is busy building CUDA from the ground up, we are busy building Jacket from the top (MATLAB) down. NVIDIA is working hard to promote and develop LAPACK libraries directly into …
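For context, the sketch below shows the kind of dense linear-algebra routine the question refers to, using the standard CPU-side LAPACKE C interface to solve A*x = b. It is not Jacket code (Jacket exposes the same operations through MATLAB® syntax such as inv, eig, or the backslash operator), and the 3x3 example values are made up for illustration.

```cpp
// Solve A*x = b with the LAPACK driver dgesv (LU factorization + solve).
// Requires a LAPACKE/LAPACK installation; compile with e.g. -llapacke -llapack.
#include <cstdio>
#include <lapacke.h>

int main() {
    // 3x3 system stored row-major; the exact solution is x = [1, 1, 1].
    double A[9] = { 4, 1, 2,
                    1, 3, 0,
                    2, 0, 5 };
    double b[3] = { 7, 4, 7 };   // right-hand side; overwritten with x
    lapack_int ipiv[3];          // pivot indices from the LU factorization

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, b, 1);
    if (info != 0) {
        std::printf("dgesv failed: info = %d\n", static_cast<int>(info));
        return 1;
    }
    std::printf("x = [%f, %f, %f]\n", b[0], b[1], b[2]);
    return 0;
}
```

The appeal of GPU support for these routines is that the same mathematical work, here an LU factorization and solve, maps well onto the device, which is exactly why users keep asking for it.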

Data-parallelism vs Task-parallelism

John Melonakos | CUDA, OpenCL

In order to understand how Jacket works, it is important to understand the difference between data parallelism and task parallelism. There are many ways to define these, but simply put, in our context: task parallelism is the simultaneous execution, on multiple cores, of many different functions across the same or different datasets; data parallelism (a.k.a. SIMD) is the simultaneous execution, on multiple cores, of the same function across the elements of a dataset. Jacket focuses on exploiting data parallelism, or SIMD computations. The vectorized MATLAB language is especially conducive to good SIMD operations (more so than a non-vectorized language such as C/C++). And if you are going to need a vectorized notation to achieve SIMD computation, why not choose the …
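To make the distinction concrete, here is a minimal, library-agnostic C++ sketch (not Jacket or ArrayFire code; the function and variable names are invented for illustration): two different reductions run concurrently on separate threads (task parallelism), then one operation is applied uniformly to every element of an array (data parallelism).

```cpp
#include <cmath>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Data parallelism (SIMD-style): the SAME operation is applied to every
// element of a dataset. In a GPU library this loop would become a single
// vectorized expression launched across thousands of GPU threads.
void scale_all(std::vector<float>& data, float factor) {
    for (float& x : data) x *= factor;
}

int main() {
    std::vector<float> a(1 << 20, 1.0f), b(1 << 20, 2.0f);

    // Task parallelism: two DIFFERENT functions (here, two independent
    // reductions) execute at the same time on different cores.
    double sum_a = 0.0, sum_b = 0.0;
    std::thread t1([&] { sum_a = std::accumulate(a.begin(), a.end(), 0.0); });
    std::thread t2([&] { sum_b = std::accumulate(b.begin(), b.end(), 0.0); });
    t1.join();
    t2.join();

    // Data-parallel step: scale every element of `a` by the same factor.
    scale_all(a, static_cast<float>(std::sqrt(sum_a + sum_b)));
    std::printf("sums: %.0f %.0f, a[0] = %f\n", sum_a, sum_b, a[0]);
    return 0;
}
```

The point of the sketch is the shape of the work, not the speed: Jacket targets the data-parallel pattern, where a vectorized MATLAB expression maps naturally onto a single GPU computation.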