CUDA and OpenCL Training
We provide high-quality 2- or 4-day CUDA™ and OpenCL training courses.
Because we specialize solely in CUDA and OpenCL work, we are uniquely able to immerse students in the art of GPU and heterogeneous computing. Students walk away proficient in CUDA or OpenCL programming, equipped with the latest industry knowledge and techniques for GPU computing, and familiar with the tricks that maximize performance on heterogeneous computing devices.
For groups, we can travel to your location, host training at our Atlanta office, or train remotely via video conference, tailoring our instruction to your application-specific needs. For individuals, we offer 2-day training on a quarterly basis.
We recommend that attendees have working knowledge of C/C++ for a fruitful learning experience.
"Can't ask for better individualized instruction than the environment I was fortunate enough to encounter. The instructor was able to completely focus on my particular needs and concerns."
Included in All Courses
You provide the minds, and we'll take care of the rest. Each training comes with the following:
- Instruction by an engaging GPU computing expert
- Hands-on exercises
- Use of a laptop with CUDA- and OpenCL-capable GPUs and CPUs
- Choice of Linux or Windows operating system
- Printed manual of lecture material
- Electronic copy of programming exercises
CUDA and OpenCL Training Syllabus
* Courses are taught in either CUDA or OpenCL. Similar principles apply in each framework.
Day 1, Introduction
- GPU Computing Overview
- The Programming Model
- Basic Dataset Mapping
- Libraries, ArrayFire
- Profiling Tools
- A Simple Kernel
- Equivalent ArrayFire Example
- Using Libraries
- Monte Carlo Pi Estimation
- Timing and ArrayFire
- Debugging Code
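The "A Simple Kernel" session above typically revolves around writing and launching a first CUDA kernel. As an illustrative sketch only (not the actual course material), a minimal element-wise vector addition might look like this; the use of unified memory here is a simplifying assumption to keep the example short:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one output element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory avoids explicit host/device copies.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // round up to cover all n
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch configuration and the thread-index calculation are the core ideas covered in the programming-model and dataset-mapping lectures.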
Day 2, Optimization
- Architecture: Grids, Blocks, and Threads
- Memory Model: Global, Shared, and Constant Memory
- Advanced Mapping Techniques
- Streams: Asynchronous Launches and Concurrent Execution
- ArrayFire: Lazy Evaluation and Code Vectorization
- Matrix Transpose
- Optimization Using Shared Memory
- Median Filter
- Optimization Using Constant Memory
- Stream Example
- ArrayFire Example: Nearest Neighbor Algorithm
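The matrix-transpose and shared-memory sessions above pair naturally: a naive transpose has uncoalesced writes, while staging tiles in shared memory makes both reads and writes coalesced. A hedged sketch of the standard tiled-transpose pattern (the tile size and padding are conventional choices, not course specifics):

```cuda
#define TILE 32

// Tiled matrix transpose using a shared-memory staging buffer.
__global__ void transpose(const float* in, float* out, int width, int height) {
    // The +1 column of padding avoids shared-memory bank conflicts.
    __shared__ float tile[TILE][TILE + 1];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < width && y < height)
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];  // coalesced read

    __syncthreads();

    // Swap block indices so the write side is also coalesced.
    x = blockIdx.y * TILE + threadIdx.x;
    y = blockIdx.x * TILE + threadIdx.y;
    if (x < height && y < width)
        out[y * height + x] = tile[threadIdx.x][threadIdx.y];  // coalesced write
}
```

The same staging idea generalizes to the median-filter exercise, where each block loads its input neighborhood into shared memory once instead of re-reading global memory per pixel.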
Day 3, Multi-GPU
- Multi-GPU Use Cases
- Multi-GPU Contexts
- Existing Libraries
- Scaling Across Multiple GPUs
- Out-of-Core Problems: Matrix Multiply
- Task-Level Parallelism: Optimization
- ArrayFire Multi-GPU
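The multi-GPU sessions above build on one basic mechanism in the CUDA runtime: `cudaSetDevice` selects which GPU subsequent allocations and launches target. As a rough sketch only (the per-device work is elided), scaling across devices usually starts like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate all visible GPUs and give each one its own slice of the work.
int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);  // subsequent runtime calls target device d

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s\n", d, prop.name);

        // Allocate this device's data slice and launch its kernels here;
        // launches on different devices proceed concurrently.
    }

    // Wait for all devices to finish their queued work.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
    }
    return 0;
}
```

Because kernel launches are asynchronous, the host thread can queue work on every device before synchronizing, which is the essence of the task-level-parallelism exercise.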
Day 4, Algorithm Problems
Lectures and Practice (customizable):
- Scan Algorithms
- Customer-Specific Problem
Xilinx SDAccel Training
In addition to CUDA & OpenCL training, we offer training for Xilinx SDAccel. ArrayFire is the exclusive Xilinx SDAccel Authorized Training Partner (ATP) for North America. Our SDAccel training courses help enable design teams to leverage Xilinx FPGAs for OpenCL application acceleration.
Course Name: "Developing and Optimizing Applications Using the OpenCL Framework for FPGAs"
(or email us at firstname.lastname@example.org with any questions)
Individual 2-Day Course
For individuals, we provide an online 2-day CUDA-only training course once per quarter, following the Day 1 and Day 2 syllabus for CUDA training shown above.
- Q2: June 21-22, 2022
- Q3: September 27-28, 2022
- Q4: December 13-14, 2022
- Q1: March 28-29, 2023
Spots in each course are limited, so reserve yours as soon as possible.