Lectures and Training on Fundamentals of Accelerated Computing with CUDA Python - NVIDIA, CentraleSupélec, Moulon Mésocentre

NVIDIA, CentraleSupélec (MICS and the GPU Research Center), and the Moulon Mésocentre are organizing the Lectures and Training on Fundamentals of Accelerated Computing with CUDA Python.

These lectures and this training teach you the fundamental tools and techniques for running GPU-accelerated Python applications using CUDA® and the Numba compiler. You'll work through dozens of hands-on coding exercises and, at the end of the training, implement a new workflow to accelerate a fully functional linear algebra program originally designed for CPUs, observing impressive performance gains.
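
To give a flavour of the hands-on exercises, here is a minimal, illustrative sketch (not taken from the course material) showing how an element-wise operation can be turned into a GPU-accelerated, NumPy-style ufunc with Numba's @vectorize decorator; it assumes a CUDA-capable GPU and the numba and numpy packages are available.

    # Minimal sketch: a GPU-accelerated ufunc with Numba (assumes a CUDA-capable GPU).
    import numpy as np
    from numba import vectorize

    @vectorize(['float32(float32, float32)'], target='cuda')
    def add_gpu(x, y):
        # Executed element-wise on the GPU for every pair of inputs.
        return x + y

    a = np.arange(1_000_000, dtype=np.float32)
    b = np.ones_like(a)
    c = add_gpu(a, b)   # Arrays are copied to the GPU, computed, and copied back.
    print(c[:3])        # [1. 2. 3.]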

Date: Friday, 20th March 2020

Duration: 8 hours (from 9:00 to 17:00)

Material: During the workshop, each participant will have dedicated access to a fully configured, GPU-accelerated workstation in the cloud. Each participant should bring their own laptop with a Wi-Fi connection.

Assessment type: Code-based

Prerequisites: Basic Python competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations. NumPy competency, including the use of ndarrays and ufuncs. No previous knowledge of CUDA programming is required.

Languages: English

Tools, libraries, and frameworks: Numba, NumPy

Learning Objectives: At the end of the day, you'll have an understanding of the fundamental tools and techniques for GPU-accelerated Python applications with CUDA and Numba (an illustrative sketch follows the list):

  • GPU-accelerate NumPy ufuncs with a few lines of code.
  • Configure code parallelization using the CUDA thread hierarchy.
  • Write custom CUDA device kernels for maximum performance and flexibility.
  • Use memory coalescing and on-device shared memory to increase CUDA kernel bandwidth.
  • Generate random numbers on the GPU.
  • Learn intermediate GPU memory management techniques.
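
As a purely illustrative sketch of the thread-hierarchy and custom-kernel objectives above (the kernel name and launch parameters below are chosen for the example, not taken from the course), a device kernel can be written with Numba's @cuda.jit and launched over an explicit grid of thread blocks:

    # Minimal sketch: a custom CUDA device kernel with Numba (assumes a CUDA-capable GPU).
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_kernel(out, x, factor):
        i = cuda.grid(1)          # Global thread index within the 1-D grid.
        if i < x.size:            # Guard threads that fall outside the array.
            out[i] = x[i] * factor

    x = np.arange(1_000_000, dtype=np.float32)
    out = np.zeros_like(x)
    threads_per_block = 256
    blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block
    scale_kernel[blocks_per_grid, threads_per_block](out, x, 2.0)   # Kernel launch.
    print(out[:3])                # [0. 2. 4.]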

Instructors: The session will be taught by NVIDIA instructors.

Registrations: To ensure suitable interaction between the instructors and the participants, this first session is limited to 20 participants. Please register only if you can attend for the full day. Registrations will be processed on a first-come, first-served (FIFO) basis.

People should register by sending an email to Frédéric Magoulès (frederic.magoules@centralesupelec.fr).

Contacts: Frédéric Magoulès (CentraleSupélec).

Price: By registration
Time: Friday 20 March 2020, 09:00
Location: CentraleSupélec, campus Paris-Saclay, Bâtiment Eiffel, 8-10 rue Joliot-Curie - 91190 Gif-sur-Yvette