Detecting Anomalies of Large-Scale Light Curves

John Melonakos | ArrayFire, Case Studies

Researchers from Tsinghua University, the Chinese Academy of Sciences, and David Bader of the New Jersey Institute of Technology credit ArrayFire in a paper published at the 2020 IEEE High Performance Extreme Computing Conference (HPEC). The paper is titled “GPU Accelerated Anomaly Detection of Large Scale Light Curves.” In this research, light from 200,000 stars is tracked in search of microlensing events, in which high-mass dark objects bend the light from a source star; such events can indicate the discovery of planets and black holes.


Summary

Microlensing is a unique anomaly that occurs when a lens (or lenses) passes between a light source (a star) and an observer (Earth). These lenses are high-mass objects that bend the light from the source. Because the lenses themselves need not emit any light, microlensing is especially useful for detecting “dark” objects, such as new planets and black holes. It is also a rare astronomical event, especially on the timescale of hours.

Microlensing depiction, courtesy Wikipedia.
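To make the signal being searched for concrete, the sketch below generates a synthetic point-source microlensing light curve using the standard Paczynski magnification formula. This is an illustrative C++/ArrayFire snippet; the parameters u0, t0, and tE are placeholders, not values from the paper.

```cpp
// Synthetic point-source microlensing light curve via the standard Paczynski
// magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), where u(t) is the
// lens-source separation in Einstein radii. Parameter values are illustrative.
#include <arrayfire.h>

af::array microlensing_curve(int n_samples, double u0, double t0, double tE) {
    // Sample times in arbitrary units (e.g. hours), one per observation
    af::array t = af::range(af::dim4(n_samples));
    // Projected lens-source separation u(t) = sqrt(u0^2 + ((t - t0) / tE)^2)
    af::array u = af::sqrt(u0 * u0 + af::pow((t - t0) / tE, 2));
    // Paczynski magnification of the source star
    return (u * u + 2) / (u * af::sqrt(u * u + 4));
}
```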

Identifying anomalies in millions of stars in real time is a great challenge. In this paper, the researchers develop a matched filtering-based algorithm to detect a typical anomaly, microlensing. The algorithm can detect short-timescale microlensing events with high accuracy at an early stage and with a very low false-positive rate.
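As a rough illustration of the matched filtering idea (not the authors’ exact implementation), the sketch below scores a single normalized light curve against a pre-built bank of microlensing templates and raises an alarm when the strongest response crosses a threshold. The function and variable names are assumptions made for the example.

```cpp
// Matched-filtering sketch: score a light curve against a template bank and
// flag a detection when the strongest response exceeds a threshold.
#include <arrayfire.h>

// light_curve: [n_samples x 1] baseline-subtracted, noise-normalized flux
// templates:   [n_samples x n_templates], each column a unit-norm template
bool matched_filter_detect(const af::array& light_curve,
                           const af::array& templates,
                           float threshold) {
    // Inner product of the light curve with every template in one matmul
    af::array scores = af::matmul(templates, light_curve,
                                  AF_MAT_TRANS, AF_MAT_NONE);
    // The best-matching template decides whether to raise an alarm
    return af::max<float>(af::abs(scores)) > threshold;
}
```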

Furthermore, the researchers designed a scalable, GPU-accelerated computational framework using ArrayFire to enable real-time follow-up observation. The framework efficiently divides the algorithm between the CPU and the GPU, accelerating large-scale light-curve processing to meet low-latency requirements.
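A minimal sketch of what such a CPU/GPU split might look like with ArrayFire is shown below: the host stages a batch of light curves (one column per star), and the GPU scores every star against the template bank in a single matrix multiply. The batch sizes, threshold, and random stand-in template bank are assumptions, not values from the paper.

```cpp
// Batched GPU scoring of many stars at once; the CPU side only stages data.
#include <arrayfire.h>
#include <cstdio>
#include <vector>

int main() {
    const int n_samples = 256, n_stars = 200000, n_templates = 32;

    // CPU side: host buffer filled by the data-ingest pipeline (zeros here)
    std::vector<float> host_curves(static_cast<size_t>(n_samples) * n_stars, 0.0f);

    // GPU side: upload the batch and a (stand-in) template bank
    af::array curves(n_samples, n_stars, host_curves.data()); // [n_samples x n_stars]
    af::array templates = af::randu(n_samples, n_templates);  // placeholder bank

    // One matrix multiply scores every template against every star
    af::array scores = af::matmul(templates, curves, AF_MAT_TRANS, AF_MAT_NONE);

    // Per-star best response; stars above threshold become candidate alerts
    af::array best = af::max(af::abs(scores), 0);   // [1 x n_stars]
    af::array candidates = af::where(best > 5.0f);  // indices of flagged stars

    af::sync(); // ensure GPU work has finished before reporting
    printf("flagged %lld candidate stars\n",
           static_cast<long long>(candidates.elements()));
    return 0;
}
```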

Experimental results show that the proposed method can process 200,000 stars (the maximum number of stars processed by a single GWAC telescope) in approximately 3.34 seconds on current commodity hardware, while achieving 92% accuracy, with detections occurring, on average, approximately 14% before the peak of the anomaly and with zero false alarms. With the proposed sharding mechanism, the framework can be extended to multiple GPUs to further improve performance for the higher data-throughput requirements of next-generation telescopes.
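The multi-GPU extension via sharding could look something like the loop below, where contiguous groups of stars are assigned round-robin to the available devices. This is only a sketch of the general idea; process_shard is a hypothetical per-shard routine standing in for the filtering pipeline above.

```cpp
// Sharding sketch: split the star catalog into shards and assign each shard
// to a GPU round-robin. process_shard is a hypothetical placeholder.
#include <arrayfire.h>
#include <algorithm>

void process_all_stars(int n_stars, int shard_size) {
    const int n_devices = af::getDeviceCount();
    for (int start = 0, shard = 0; start < n_stars; start += shard_size, ++shard) {
        af::setDevice(shard % n_devices);   // pick the GPU for this shard
        int count = std::min(shard_size, n_stars - start);
        // process_shard(start, count);     // hypothetical per-shard filtering
        (void)count;                        // no-op in this sketch
    }
}
```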

Results

The researchers developed an end-to-end anomaly detection system that pairs the matched filtering-based detection algorithm with the CPU/GPU processing framework described above.

With ArrayFire, this system processed 200,000 stars in approximately 3.34 seconds on average, fast enough to support real-time follow-up observation.

Conclusion

Low latency, high throughput, and scalable parallel algorithms are crucial for accelerating scientific research based on big data. In this research, the authors presented an accurate matched filtering-based microlensing anomaly detection algorithm with a very low false-positive rate. Furthermore, their algorithm can raise an alarm approximately 14% before the peak of a microlensing event on average.


Thanks to these researchers for sharing their great work with us!
