GPU Giddy – Excitement Building for GTC

By John Melonakos | CUDA, Events

GTC is coming up…

The GPU Technology Conference (GTC) starts later this month and is sure to generate a new level of excitement and energy around GPU computing. The conference includes over 250 technology sessions presented by industry, government, and academic technology leaders. AccelerEyes is pleased to be well represented at this year’s conference by our technical leadership and a number of our customers. If you plan to attend the conference, be sure to include the sessions outlined below on your agenda.

In addition to being well represented, we are also flattered to see that others in the market have recognized that GPU Computing with MATLAB delivers clear productivity gains and that the performance improvements made possible by GPUs are a reality today.  Most notably, The MathWorks will share its vision and capabilities for GPU Computing with MATLAB during the conference, which should increase the visibility of and demand for the technology worldwide.  We encourage everyone to attend that session to learn about their new offering.

AccelerEyes will be demoing Jacket at Table #56, and we hope you will stop by to see the latest and greatest Jacket technology during the conference.

Jacketized GTC Sessions

2132 – Accelerating Biologically Inspired Computer Vision Models

Join us for a discussion on applying commodity-server-based clusters and GPU-based clusters to simulating computer vision algorithms at a scale that approaches that of biological vision. We consider the limitations of each technology, survey approaches taken thus far, and suggest new hybrid models and programming frameworks to overcome current limitations and substantially improve performance.

Speaker: Tom Dean, Google Inc.
Topic: Computer Vision, Machine Learning & Artificial Intelligence
Time: Tuesday, September 21st, 11:00 – 11:50

2268 – Think Data-Parallel! Building Data-Parallel Code with M

Discover and leverage the parallelism inherent in pre-existing codes. Oftentimes, parallelism hides in seemingly serial programs, obscured by indexing or looping that makes it appear non-existent. Several real-world examples of such code demonstrate simple yet surprisingly effective rules for detecting potential parallelism.

For each example, learn how to express the code at a higher, more concise level in M by vectorizing computations. We present several canned vectorization techniques for many common, and sometimes very difficult, use cases.

Learn how such vectorization concisely brings the parallelism of the code to the forefront and transforms programs that might originally have been difficult to run on a SIMT device into code well suited for execution on the GPU. GPU speedups will be shown utilizing Jacket.

Speaker: Gallagher Pryor, AccelerEyes
Topic: General Interest
Time: Tuesday, September 21st, 15:30 – 15:50
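
To give a flavor of the loop-to-vectorized transformation this session describes, here is a minimal M sketch of our own (illustrative only, not excerpted from the talk); the variable names are made up for the example, and the closing comment about a GPU cast reflects Jacket’s usual workflow rather than anything specific to the session:

    % Sample input data for the illustration
    a = 2; b = 3;
    x = rand(1, 1e6);

    % Seemingly serial version: an explicit loop hides the data parallelism
    y = zeros(size(x));
    for k = 1:numel(x)
        y(k) = a * x(k)^2 + b;
    end

    % Vectorized M: the same computation with the parallelism made explicit
    y = a .* x.^2 + b;

    % With Jacket, casting x to a GPU array type (e.g., gsingle) lets the
    % same vectorized line execute on the GPU.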

2300 – High-Performance Compressive Sensing using Jacket

This talk will present the ongoing work that I am doing in the L1-optimization group at Rice University. The purpose of the work is to merge compressive sensing, for image/signal reconstruction, with GPU computation, using NVIDIA’s GPUs to enhance compressive sensing (CS) technology.

This talk will cover basic concepts in compressive sensing and how readily these computations adapt to the GPU, in particular when working with Jacket (by AccelerEyes). We will then cover some of our numerical experiments, which encompass the use of different flavors of algorithms.

Speaker: Nabor Reyna
Topics: Imaging, Tools & Libraries
Time: Wednesday, September 22nd, 10:30 – 10:50
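
For readers new to the area, here is a minimal, self-contained M sketch (our own illustration, not material from the talk) of the measurement model that compressive sensing reconstruction starts from: a sparse signal observed through a short, random measurement matrix.

    % Dimensions: signal length n, measurements m (m << n), sparsity k
    n = 1024; m = 256; k = 20;

    % A k-sparse signal
    x = zeros(n, 1);
    idx = randperm(n);
    x(idx(1:k)) = randn(k, 1);

    % Random Gaussian measurement matrix and compressive measurements
    Phi = randn(m, n) / sqrt(m);
    y = Phi * x;

    % Reconstruction solves: minimize ||x||_1 subject to Phi*x = y.
    % The repeated Phi*x and Phi'*r products inside such solvers dominate
    % the runtime, which is the kind of work that maps well to the GPU.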

2201 – A Case Study of Accelerating Matlab Based Applications using GPUs

Learn how to accelerate MATLAB-based applications using GPUs. We cover SPM, a popular neuroimaging software package, and show how to use CUDA and Jacket to speed up computationally intensive MATLAB applications.

Speaker: Aniruddha Dasgupta, Georgia Institute of Technology
Topic: Medical Imaging & Visualization
Time: Wednesday, September 22nd, 16:00 – 16:50

2271 – Compose CUDA Masterpieces! Write better, Leverage More

Not all CUDA code is created equal. Learn how to step up your CUDA game. Also, learn how to build large, multi-person CUDA projects for your organization.

In very clear descriptions, learn the difference between naïve GPU code, intermediate GPU code, and advanced GPU mastery. We show how careful construction of CUDA kernels can affect application performance.

We also discuss how Jacket tools greatly facilitate the development of CUDA-based projects.

Finally, we will debut the Jacket runtime’s new C/C++ library. With this library, the technical computing functions in Jacket’s MATLAB engine are made available in C/C++.

Speaker: James Malcolm, AccelerEyes
Topic: Tools & Libraries
Time: Thursday, September 23rd, 16:00 – 16:50

2100 – Hybrid GPU/Multicore Solutions for Large Linear Algebra Problems

Large linear algebra problems may be solved using recursive block decomposition in which GPUs efficiently compute the sub-blocks and multicore CPUs put the sub-blocks back together within a large shared memory space. This talk will present benchmark results for such a hybrid approach, implemented in Matlab® and using Jacket® to access the GPU compute power.

Speaker: Nolan Davis, SAIC
Topics: High Performance Computing, Algorithms & Numerical Techniques, Signal processing
Time: Thursday, September 23rd, 16:00 – 16:50
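
To make the block-decomposition idea concrete, here is a minimal one-level M sketch of our own (not the SAIC implementation): each sub-block product is an independent piece of work that could be dispatched to a GPU, and the final concatenation is the CPU-side reassembly in shared memory.

    % A one-level 2x2 block decomposition of C = A*B
    n = 2000; h = n/2;
    A = randn(n); B = randn(n);

    % Partition A and B into sub-blocks
    A11 = A(1:h, 1:h);      A12 = A(1:h, h+1:end);
    A21 = A(h+1:end, 1:h);  A22 = A(h+1:end, h+1:end);
    B11 = B(1:h, 1:h);      B12 = B(1:h, h+1:end);
    B21 = B(h+1:end, 1:h);  B22 = B(h+1:end, h+1:end);

    % Each sub-block product is an independent GEMM, the GPU-friendly part
    C11 = A11*B11 + A12*B21;   C12 = A11*B12 + A12*B22;
    C21 = A21*B11 + A22*B21;   C22 = A21*B12 + A22*B22;

    % The multicore/shared-memory side stitches the result back together
    C = [C11, C12; C21, C22];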
