Bringing Together the GPU Computing Ecosystem for Python

John Melonakos | Announcements, ArrayFire, Computing Trends, CUDA, Open Source, Python

To date, we have not done a lot for the Python ecosystem. A few months ago, we decided it was time to change that. As NVIDIA said in this post, the current slate of GPU tools available to Python developers is scattered. With some attention to community building, perhaps we can build something better — together. NVIDIA spoke about its plans to help clean up the ecosystem. We're on board with that mentality and have two ways we propose to contribute: We're working on a survey paper that assesses the state of the ecosystem. What technical computing things can you do with each package? What benchmarks result from the packages on real Python user code? What plans does each group have …

ArrayFire v3.8 Release

John Melonakos | Announcements, ArrayFire

We are excited to share the v3.8 release of ArrayFire! ArrayFire is used in commercial, academic, and government projects around the world, solving some of the toughest computing problems in the most innovative projects. It is well-tested and amazingly fast! In this post, we share some of the major features added to ArrayFire in its 3.8 feature release. The binaries and source code can be downloaded from these locations: Official installers, GitHub repository, Official APT repository. Starting with this release, we will provide Ubuntu packages from our APT repository. To install our packages, add our APT repository with the commands below. At the moment we only support bionic (18.04) and focal (20.04). apt-key adv --fetch-key https://repo.arrayfire.com/GPG-PUB-KEY-ARRAYFIRE-2020.PUB echo "deb [arch=amd64] https://repo.arrayfire.com/ubuntu $(lsb_release …

The Roaring 20s in AI & Technical Computing

John Melonakos | ArrayFire, Computing Trends, Open Source

Since ArrayFire was founded in 2007, there has been an explosion in software and its importance to our lives. Computers, connected to sensors and real-world outcomes, do really cool things that touch nearly every aspect of modern life. I believe these are exciting times for technical computing and for HPC, as evidenced by the work showcased this week at SC20. While ArrayFire focuses purely on software, our hardware partners turn our imaginative lines of code into real-world applications. AMD, NVIDIA, and Intel have each evolved tremendously since we started ArrayFire. Over a decade ago, NVIDIA and its founder-CEO Jensen Huang saw the opportunity to teach the world a new heterogeneous model of computing that overwhelmingly convinces scientists, engineers, and analysts …

I am AI at NVIDIA & ArrayFire

John Melonakos | AI, ArrayFire, Events

I am an explorer. I am a helper. I am a healer. I am a visionary. I am a builder. I am even the narrator of the story you are watching. And the composer of the music. I am AI. These words are from the first 3:11 of Jensen's keynote today. Yesterday, another amazing NVIDIA GTC kicked off. The event was fully remote due to the coronavirus, but I still enjoyed the content without the travel. I encourage you to watch the video below. A masterpiece. ArrayFire has participated in and exhibited at every in-person NVIDIA GTC since the 2008 NVIDIA NVISION conference; click the link for a nice flashback. NVIDIA has come a long way from that 0:37 clip and the 3:11 clip Jensen showed above. At NVISION, we …

ArrayFire v3.7.x Release

Stefan Yurkevitch | Announcements, ArrayFire

With the 3.7.2 patch release out, we want to highlight some of the major features added to ArrayFire. The binaries have been available for a few weeks, but we wanted to discuss the changes here. ArrayFire can be downloaded from these locations: Official installers, GitHub repository. This version of ArrayFire is better than ever! We have added many new features that expand the capabilities of ArrayFire while improving its performance and flexibility. Some of the new features include: 16-bit floating-point support, neural-network-compatible convolution and gradient functions, reduce-by-key, confidence connected components, array padding functions, support for sparse-sparse arithmetic operations, pseudo-inverse, meanvar(), rsqrt(), and much more! We have also spent a significant amount of effort exposing the …
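Reduce-by-key, one of the features above, reduces all values that share the same key in a single pass. As a rough illustration of the concept (using NumPy rather than the ArrayFire API — `sum_by_key` below is an illustrative helper, not a real ArrayFire function):

```python
import numpy as np

def sum_by_key(keys, vals):
    """Reduce-by-key: sum all values that share the same key.

    Returns (unique_keys, sums). This mirrors the idea behind
    ArrayFire's reduce-by-key, but is not the ArrayFire API.
    """
    keys = np.asarray(keys)
    vals = np.asarray(vals)
    uniq, inv = np.unique(keys, return_inverse=True)
    sums = np.zeros(len(uniq), dtype=vals.dtype)
    np.add.at(sums, inv, vals)  # scatter-add each value into its key's bin
    return uniq, sums

k, s = sum_by_key([0, 0, 1, 2, 2, 2], [1, 2, 3, 4, 5, 6])
# k -> [0, 1, 2]; s -> [3, 3, 15]
```

The same pattern generalizes to other reductions (min, max, product) by swapping the accumulation operator.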

ArrayFire v3.6.4

Umar Arshad | ArrayFire

We are proud to announce another exciting release of the ArrayFire library. This version fixes a critical performance regression in the ArrayFire just-in-time (JIT) kernel generation code. We discovered the regression late in the release window for v3.6.3 and couldn't address it in that version, so this release consists of only two commits. Please check out the https://arrayfire.com/download page for the latest installers.

ArrayFire v3.6.2 Release

Stefan Yurkevitch | ArrayFire

We are excited to announce ArrayFire v3.6.2! In this release we have fixed a number of bugs, improved documentation, and added a few features while improving performance as always. We highlight some of the exciting changes below. New features and improvements Several features added in v3.6.2 are concerned with batching and broadcasting. In v3.6 we introduced batched matmul(), allowing you to multiply several matrices at the same time. You could only batch arrays that were the same shape and size on the left-hand side and the right-hand side. In cases where you wanted to multiply one matrix with multiple matrices, you had to tile the inputs so that they were the same shape before you performed the multiplication. …
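To make the tiling workaround concrete, here is a NumPy sketch of the same idea. Note this illustrates the concept rather than the ArrayFire API: NumPy puts the batch dimension first, whereas ArrayFire batches over trailing dimensions.

```python
import numpy as np

# One 2x3 matrix A, and a batch of ten 3x4 matrices B (batch axis first in NumPy).
A = np.random.rand(2, 3)
B = np.random.rand(10, 3, 4)

# "Tile first" workaround described above: replicate the lone matrix across
# the batch so both sides have the same shape, then batch-multiply.
A_tiled = np.broadcast_to(A, (10, 2, 3))
C_tiled = A_tiled @ B                      # shape (10, 2, 4)

# With broadcasting support, the tiling step is unnecessary:
C = A @ B                                  # the lone matrix is broadcast
assert np.allclose(C, C_tiled)
```

Broadcasting removes both the extra code and the extra memory traffic of materializing the tiled copies.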

ArrayFire v3.6 Release

Umar Arshad | Announcements, ArrayFire

Today we are pleased to announce the release of ArrayFire v3.6. It can be downloaded from these locations: Official installers, GitHub repository. This latest version of ArrayFire is better than ever! We added several new features that improve the performance and usability of the ArrayFire library. The main features are: support for batched matrix multiply, the new topk function, and the anisotropic diffusion filter. We have also spent a significant amount of effort improving the internals of the library. The build system has been significantly improved and reorganized. Batched Matrix Multiplication The new batched matmul allows you to perform several matrix multiplication operations in one call to matmul. You might want to call this function if you are performing multiple smaller matrix multiplication operations. Here …
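The topk function mentioned above returns the k largest elements without fully sorting the input. A minimal NumPy sketch of the idea (the `topk` helper below is illustrative, not the ArrayFire API):

```python
import numpy as np

def topk(a, k):
    """Return the k largest values of a 1-D array and their indices,
    sorted descending. Illustrates the idea behind a top-k selection;
    not the ArrayFire API itself."""
    idx = np.argpartition(a, -k)[-k:]      # unordered top-k indices, O(n)
    idx = idx[np.argsort(a[idx])[::-1]]    # sort only those k values
    return a[idx], idx

vals, idx = topk(np.array([3, 1, 4, 1, 5, 9, 2, 6]), 3)
# vals -> [9, 6, 5]; idx -> [5, 7, 4]
```

Partitioning first and sorting only the k survivors is what makes top-k cheaper than a full sort when k is much smaller than the array length.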

ArrayFire v3.5.1 Release

Miguel Lloreda | Announcements, ArrayFire

We are excited to announce ArrayFire v3.5.1! This release focuses on fixing bugs and improving performance. Here are the improvements we think are most important: Performance improvements We've improved element-wise operation performance for the CPU backend. The af::regions() function has been modified to leverage texture memory, improving its performance. Our JIT engine has been further optimized to boost performance. Bug fixes We've squashed a long-standing bug in the CUDA backend that caused failures whenever the second, third, or fourth dimensions were large enough to exceed limits imposed by the CUDA runtime. The previous implementation of af::mean() suffered from overflows when the summation of the values lay outside the range of the backing data type. New kernels for each of …
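The af::mean() overflow described above is easy to reproduce in miniature: summing in a narrow integer type wraps around, while accumulating in a wider type gives the right answer. A NumPy sketch (accumulating in a wider type is the standard fix; ArrayFire's exact internals may differ):

```python
import numpy as np

# 8-bit data whose sum (800) exceeds the uint8 range (255).
x = np.array([200, 200, 200, 200], dtype=np.uint8)

# Summing in the backing type wraps around, giving a nonsense mean:
naive = x.sum(dtype=np.uint8) / x.size     # (800 mod 256) / 4 = 8.0

# Accumulating in a wider type avoids the overflow:
correct = x.sum(dtype=np.uint64) / x.size  # 800 / 4 = 200.0

assert naive != correct
```

The same failure mode applies to any reduction whose intermediate result can exceed the input dtype's range.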

ArrayFire v3.5 Official Release

Umar Arshad | Announcements, ArrayFire, CUDA, Open Source, OpenCL

Today we are pleased to announce the release of ArrayFire v3.5, our open source library of parallel computing functions supporting CUDA, OpenCL, and CPU devices. This new version of ArrayFire improves features and performance for applications in machine learning, computer vision, signal processing, statistics, finance, and more. This release focuses on thread safety, support for simple sparse-dense arithmetic operations, a Canny edge detector function, and a genetic algorithm example. A complete list of ArrayFire v3.5 updates and new features can be found in the product Release Notes. Thread Safety ArrayFire now supports multithreaded programming models. This is not intended to improve performance, since most of the parallelism happens on the device, but it does allow you to use multiple devices in …
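The multi-device pattern this enables can be sketched with ordinary host threads, one per device. The select_device() and run_kernel() calls below are placeholders, not real ArrayFire functions; the point is only that a thread-safe library lets each host thread drive its own device independently:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(device_id, data):
    # In a thread-safe library, each host thread can bind to its own
    # device and run independent work, e.g.:
    #   select_device(device_id)   # hypothetical: bind thread to device
    #   return run_kernel(data)    # hypothetical: device-side work
    return sum(data) * device_id   # placeholder computation for this sketch

# Two host threads, each standing in for one device's work queue.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(worker, [1, 2], [[1, 2, 3], [4, 5, 6]]))
# results -> [6, 30]
```

Without thread safety, the host threads would have to serialize every call through a single lock, defeating the purpose of driving multiple devices at once.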