Triangle Counting in Graphs on the GPU (Part 1)

Oded Green, ArrayFire

Triangle counting is a building block for many social network analytics, most notably the widely used clustering coefficient. The clustering coefficient identifies key players in a network based on their local connectivity, which is expressed through the number of triangles each vertex belongs to. There are many ways to find the triangles a vertex belongs to; one of the most popular is to intersect two adjacency lists. If a pair of vertices (u,v) has any common neighbors, the intersection will find them.
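To make the idea concrete, here is a minimal sequential sketch (not our GPU code) of intersection-based triangle counting. For each edge (u,v), the common neighbors of u and v are exactly the third vertices of the triangles containing that edge:

```python
# Sketch: triangle counting via two-pointer intersection of
# sorted adjacency lists. Each triangle is found once per edge,
# i.e. three times in total, hence the final division by 3.

def count_triangles(adj):
    """adj: dict mapping each vertex to a sorted list of its
    neighbors (undirected graph). Returns the triangle count."""
    count = 0
    for u, nbrs_u in adj.items():
        for v in nbrs_u:
            if v <= u:  # process each edge only once
                continue
            nbrs_v = adj[v]
            # two-pointer merge-style intersection
            i = j = 0
            while i < len(nbrs_u) and j < len(nbrs_v):
                if nbrs_u[i] == nbrs_v[j]:
                    count += 1
                    i += 1
                    j += 1
                elif nbrs_u[i] < nbrs_v[j]:
                    i += 1
                else:
                    j += 1
    return count // 3  # each triangle counted once per edge

# A 4-clique contains C(4,3) = 4 triangles:
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(count_triangles(k4))  # -> 4
```

Because both lists are sorted, each intersection runs in time linear in the combined list lengths; this per-edge intersection is the primitive that must be parallelized on the GPU.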

While the above may sound simple, implementing an efficient intersection on the GPU is neither trivial nor straightforward: it requires smart partitioning of the work to make full use of the thousands of threads that modern GPUs offer. We just finished such an implementation, with multiple levels of parallelism that partition the work across the multiprocessors and across the SIMT units within each multiprocessor.
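The two-level partitioning can be illustrated with a small CPU-side sketch (purely illustrative, not our CUDA kernel): vertex pairs play the role of work assigned to multiprocessors, and each intersection is split into chunks that independent "threads" of a SIMT unit could process via binary search:

```python
# Illustrative sketch of splitting one adjacency-list intersection
# across several workers, mimicking SIMT-level parallelism.
from bisect import bisect_left

def intersect_chunk(chunk, other):
    """Per-'thread' unit of work: count elements of chunk that
    appear in the sorted list `other`, using binary search."""
    hits = 0
    for x in chunk:
        i = bisect_left(other, x)
        if i < len(other) and other[i] == x:
            hits += 1
    return hits

def partitioned_intersection_size(a, b, num_workers=4):
    """Split the shorter sorted list into num_workers chunks; each
    chunk is searched in the longer list independently, so the
    chunks could run concurrently with no shared state."""
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    step = max(1, -(-len(short) // num_workers))  # ceil division
    chunks = [short[i:i + step] for i in range(0, len(short), step)]
    return sum(intersect_chunk(c, long_) for c in chunks)

print(partitioned_intersection_size([1, 3, 5, 7, 9],
                                    [2, 3, 5, 8, 9, 10]))  # -> 3
```

Each chunk touches disjoint input and accumulates its own partial count, so on a GPU the partial sums would simply be reduced at the end; the real difficulty, which the sketch hides, is balancing chunk sizes when adjacency-list lengths are highly skewed.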

For testing purposes, we used graphs taken from the DIMACS 10 Graph Challenge. Our experiments were conducted on an NVIDIA K40 GPU. Our GPU triangle counting implementation achieves speedups in the range of 9X-32X over a sequential CPU implementation, as shown in the figure below.

[Figure: speedups of the GPU triangle counting implementation over a sequential CPU implementation on DIMACS 10 graphs]
