Topology Optimization with Accessibility Constraint for Multi-Axis Machining

John Melonakos | Case Studies, Computer Vision

Researchers from the Palo Alto Research Center (PARC) credit ArrayFire in a paper published in the journal Computer-Aided Design. The paper is titled “Topology Optimization with Accessibility Constraint for Multi-Axis Machining” and showcases ArrayFire accelerating the workload. Summary: In this paper, a topology optimization (TO) framework is presented to enable the automated design of mechanical components while ensuring the result can be manufactured using multi-axis machining. Although TO improves the part’s performance, the as-designed model is often geometrically too complex to be machined, and the as-manufactured model can vary significantly due to machining constraints that are not accounted for during TO. In other words, many of the optimized design features cannot be accessed by a machine tool without colliding with the …

Synthetic Aperture Radar on the Jetson TX1

John Melonakos | ArrayFire, Case Studies, Computer Vision, CUDA, Image Processing

Researchers at Peter the Great St. Petersburg Polytechnic University have implemented synthetic aperture radar (SAR) processing on the Jetson TX1 platform using ArrayFire, as described in this paper. The paper introduces SAR as “a remote sensing technique producing high-resolution radar images of the Earth’s surface. SAR technology allows obtaining wide swath radar images of objects at a considerable distance regardless of the weather and lighting conditions. It can be used by unmanned aerial vehicles and space satellites. Thus, SAR technology allows solving various problems, such as: detecting small objects (vehicles, airplanes, ships), assessing the state of railways, airfields, seaports, mapping an area, assisting in geological exploration, mapping vegetation, detecting oil spills and pollution as well as many other tasks.” The …
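
As a rough illustration of the kind of SAR step that maps well to ArrayFire on a Jetson-class GPU, here is a minimal sketch of FFT-based range compression (matched filtering against a reference chirp). The data layout (one pulse per column), the array sizes, and the random placeholder data are assumptions for illustration only; this is not the pipeline from the paper.

```cpp
#include <arrayfire.h>

// rawData: complex samples, one radar pulse per column (samples x pulses)
// chirp:   complex reference chirp with the same number of samples as a pulse
static af::array rangeCompress(const af::array& rawData, const af::array& chirp)
{
    // Per-pulse FFT (af::fft transforms along the first dimension).
    af::array dataF  = af::fft(rawData);
    // Matched filter: multiply by the conjugate spectrum of the reference chirp.
    af::array chirpF = af::conjg(af::fft(chirp));
    af::array filtered = dataF * af::tile(chirpF, 1, (unsigned)rawData.dims(1));
    // Back to the range (time) domain.
    return af::ifft(filtered);
}

int main()
{
    af::setDevice(0);  // e.g. the integrated GPU on a Jetson TX1
    // Placeholder complex data standing in for recorded pulses and the chirp.
    af::array raw   = af::randn(2048, 512, c32);
    af::array chirp = af::randn(2048, 1, c32);
    af::array compressed = rangeCompress(raw, chirp);
    af_print(af::abs(compressed(af::seq(0, 4), af::seq(0, 4))));
    return 0;
}
```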

Finger Vein Identity Recognition in “Negligible Time” using ArrayFire

John Melonakos | ArrayFire, Case Studies, Computer Vision, CUDA, Open Source

In this blog post, we summarize work by researchers in Slovakia using ArrayFire to develop OpenFinger, a finger vein identity recognition library. Fingerprints and finger veins can both be used as biometrics for identity recognition. In their sensor setup, near-infrared LEDs illuminate the finger and a camera captures the resulting image for transfer to a computer. The computing infrastructure builds on several great open source libraries, including OpenCV, Caffe, Qt, and ArrayFire. ArrayFire is used specifically in pre-processing to accelerate Gabor filtering on the GPU. The Gabor filter has proven to be one of the most suitable techniques …
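
To make the Gabor filtering step concrete, here is a minimal sketch of GPU Gabor filtering with ArrayFire's C++ API. The kernel parameters (size, wavelength, orientation, sigma, aspect ratio) and the input file name are illustrative placeholders, not the values used by OpenFinger.

```cpp
#include <arrayfire.h>
#include <cmath>
#include <cstdio>
#include <vector>

// Build a real-valued Gabor kernel on the host and upload it to the device.
static af::array gaborKernel(int ksize, float sigma, float theta, float lambda, float gamma)
{
    const float PI = 3.14159265358979f;
    const int r = ksize / 2;
    std::vector<float> h(ksize * ksize);
    for (int x = -r; x <= r; ++x) {          // x indexes columns
        for (int y = -r; y <= r; ++y) {      // y indexes rows
            float xr =  x * std::cos(theta) + y * std::sin(theta);
            float yr = -x * std::sin(theta) + y * std::cos(theta);
            float envelope = std::exp(-(xr * xr + gamma * gamma * yr * yr) / (2.f * sigma * sigma));
            float carrier  = std::cos(2.f * PI * xr / lambda);
            // Fill in column-major order, which is how af::array reads host data.
            h[(x + r) * ksize + (y + r)] = envelope * carrier;
        }
    }
    return af::array(ksize, ksize, h.data());
}

int main()
{
    // Hypothetical grayscale finger image, scaled to [0, 1].
    af::array img    = af::loadImage("finger.png", false) / 255.f;
    af::array kernel = gaborKernel(31, 4.0f, 0.0f, 8.0f, 0.5f);

    // The convolution itself runs on the GPU (or whichever backend is active).
    af::array filtered = af::convolve2(img, kernel);

    printf("filtered %d x %d image, max |response| = %f\n",
           (int)filtered.dims(0), (int)filtered.dims(1),
           af::max<float>(af::abs(filtered)));
    return 0;
}
```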

GTC 2015 ArrayFire Recordings

Aaron Taylor | ArrayFire, Computer Vision, CUDA

Missed visiting ArrayFire at GTC this year? We’ve got you covered! You can now check out the recordings of all our GTC 2015 talks and tutorials at your own convenience. Learn about accelerating your code from the best in the business. Talks: “Real-Time and High Resolution Feature Tracking and Object Recognition” by Peter Andreas Entschev. This session will cover real-time feature tracking and object recognition in high resolution videos using GPUs and productive software libraries including ArrayFire. Feature tracking and object recognition are computer vision problems that have challenged researchers for decades. Over the last 15 years, numerous approaches were proposed to solve these problems, some of the most important being SIFT, SURF and ORB. Traditionally, these approaches are so computationally …

Computer Vision in ArrayFire – Part 2: Feature Description and Matching

Peter Entschev | ArrayFire, Benchmarks, Computer Vision

In Part 1 of this series, we talked about the upcoming feature detection algorithms in the ArrayFire library. In this post we showcase some of the preliminary results of the feature description and matching functionality that is under development in the ArrayFire library. Feature description is done using the ORB feature descriptor [1]. The descriptors are matched against a database of features using Hamming distance as the metric. The results we show in this blog use the same hardware and software used in the previous blog: an Intel Sandy Bridge Xeon processor with 32 cores (for the baseline OpenCV CPU implementation), an NVIDIA Tesla K20C (for the OpenCV and ArrayFire CUDA implementations), the ArrayFire development version, and OpenCV version 2.4.9. Feature Description and Matching Benchmarks: In Part 1 we showed that …
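
For readers who want to try this today, here is a minimal sketch of how ORB description and Hamming matching are exposed in later ArrayFire 3.x releases (af::orb and af::hammingMatcher). The image file names and thresholds are placeholders, and the descriptor-dimension argument reflects ORB's N x 8 descriptor layout; this is not the post's benchmark code.

```cpp
#include <arrayfire.h>
#include <cstdio>

int main()
{
    // Hypothetical grayscale query (object) and train (scene) images.
    af::array query = af::loadImage("object.png", false);
    af::array train = af::loadImage("scene.png",  false);

    // Detect FAST features and extract ORB descriptors (N x 8 unsigned words).
    af::features qFeat, tFeat;
    af::array qDesc, tDesc;
    af::orb(qFeat, qDesc, query, 20.0f, 400);
    af::orb(tFeat, tDesc, train, 20.0f, 400);

    // For every query descriptor, find the closest train descriptor under
    // Hamming distance. Descriptor words lie along dimension 1, and we ask
    // for the single best match per query feature.
    af::array idx, dist;
    af::hammingMatcher(idx, dist, qDesc, tDesc, 1, 1);

    printf("query features: %u, train features: %u, matches: %d\n",
           (unsigned)qFeat.getNumFeatures(),
           (unsigned)tFeat.getNumFeatures(),
           (int)idx.elements());
    return 0;
}
```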

Computer Vision in ArrayFire – Part 1: Feature Detection

Peter Entschev | ArrayFire, Computer Vision

A few weeks ago we wrote Writing a Simple Corner Detector with ArrayFire. In that post, we discussed a little bit about the new features that we are working on for ArrayFire. Some of these new computer vision features will be available in the next release of ArrayFire. For the next release, ArrayFire will have a complete set of functions to get started with feature tracking, including FAST for feature detection [1], ORB for description [2], and a Hamming distance matcher. We will also include a dedicated version of the Harris corner detector [3], even though it can be written using existing ArrayFire functions. The dedicated implementation is straightforward, easy to use, and will have better performance. For this post, we will share some …
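
As released, FAST detection ended up as a single call; the sketch below shows the shape of that API in ArrayFire 3.x (af::fast), with a placeholder image name and illustrative threshold settings rather than anything from this post's benchmarks.

```cpp
#include <arrayfire.h>
#include <cstdio>

int main()
{
    // Hypothetical grayscale input image.
    af::array img = af::loadImage("lena.png", false);

    // FAST-9 with non-maximal suppression; 20.0f is the intensity threshold.
    af::features feat = af::fast(img, 20.0f, 9, true);

    // Feature coordinates and scores are returned as device-side af::array objects.
    af::array x = feat.getX();
    af::array y = feat.getY();

    printf("detected %u corners\n", (unsigned)feat.getNumFeatures());
    if (feat.getNumFeatures() >= 5) {
        af::array xy = af::join(1, x, y);      // N x 2 matrix of coordinates
        af_print(xy(af::seq(0, 4), af::span)); // first five (x, y) pairs
    }
    return 0;
}
```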

Writing a Simple Corner Detector with ArrayFire

Peter Entschev | ArrayFire, C/C++, Computer Vision

In the upcoming months we’ll be adding a lot of new Computer Vision functionality to ArrayFire, specifically targeting the most commonly used applications in this field. New functions include feature tracking, object classification, scene segmentation, optical flow, and stereo vision. Feature tracking consists of three basic steps: (1) detecting good or unique features, normally corners or blobs of an object; (2) extracting a descriptor for each feature, i.e. understanding the texture of a small patch around each feature; and (3) descriptor matching, finding the best match for each pair of descriptors (one from the object being tracked, another from a scene that potentially contains that object), if any. Harris corner detector: In this article we will be using ArrayFire to dive deeper into the first step of feature …
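
The full walkthrough is in the post itself; as a taste, here is a minimal sketch of a Harris corner response built from existing ArrayFire functions (gradients, a Gaussian window, and the det - k*trace^2 response). The window size, k, threshold, and input file name are illustrative choices, not necessarily those used in the post.

```cpp
#include <arrayfire.h>
#include <cstdio>

// Harris response map: det(M) - k * trace(M)^2, where M is the structure tensor.
static af::array harrisResponse(const af::array& gray, float k = 0.04f)
{
    // Image gradients along both axes.
    af::array ix, iy;
    af::grad(ix, iy, gray);

    // Structure-tensor terms, smoothed with a small Gaussian window.
    af::array w   = af::gaussianKernel(5, 5, 1.0, 1.0);
    af::array ixx = af::convolve2(ix * ix, w);
    af::array iyy = af::convolve2(iy * iy, w);
    af::array ixy = af::convolve2(ix * iy, w);

    af::array det = ixx * iyy - ixy * ixy;
    af::array tr  = ixx + iyy;
    return det - k * tr * tr;
}

int main()
{
    // Hypothetical grayscale input, scaled to [0, 1].
    af::array img = af::loadImage("scene.png", false) / 255.f;
    af::array r   = harrisResponse(img);

    // Count pixels whose response exceeds a fraction of the maximum
    // (a real detector would also apply non-maximal suppression).
    float thr = 0.1f * af::max<float>(r);
    float corners = af::sum<float>(r > thr);
    printf("corner candidates above threshold: %.0f\n", corners);
    return 0;
}
```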