Course Contents
This course covers the foundations of programming massively parallel processors. It focuses on the architecture of modern graphics hardware and its use for non-graphics applications. This year's course will be taught as an "Integrierte Lehrveranstaltung" (integrated course, see details below). The course will be taught in English.
Literature
[list]
[*]David Kirk, Wen-mei Hwu:
Programming Massively Parallel Processors: A Hands-on Approach, Morgan Kaufmann
[*]Hubert Nguyen:
GPU Gems 3, Addison Wesley
[*]T. Mattson, B. Sanders, B. Massingill:
Patterns for Parallel Programming, Addison Wesley
[*]gpgpu.org - General-Purpose Computation Using Graphics Hardware
[*]NVIDIA CUDA page
- NVIDIA GPU Computing Developer Home Page
- NVIDIA CUDA Documents
[*]Krste Asanovic, Ras Bodik, Bryan Christopher Catanzaro, Joseph James Gebis, Parry Husbands, Kurt Keutzer, David A. Patterson, William Lester Plishker, John Shalf, Samuel Webb Williams and Katherine A. Yelick:
The Landscape of Parallel Computing Research: A View from Berkeley, Technical Report No. UCB/EECS-2006-183, University of California at Berkeley (web site)
[/list]
Additional literature will be announced during the lecture.
Preconditions
[list]
[*]solid programming experience in C/C++
[*]basic algorithms and data structures
[/list]
Official Course Description
Graphics cards have traditionally been used (or misused) for applications beyond rendering, since such applications could benefit from the massive compute power and the special-purpose hardware available on graphics cards. This required, however, mapping the algorithm onto the individual stages of the rendering pipeline in an often tedious process. Recent architectural changes remove many of these restrictions, and newly available programming tools allow for high-level, straightforward programming of these systems. In general, massively parallel processors will play an increasingly important role in the future. This course will consist of two main components:
A theoretical and practical introduction to programming massively parallel processors (including algorithms and architectural aspects).
The exercises will be practical programming exercises and a final project based on NVIDIA's CUDA framework. Topics for the final project will be proposed and co-advised by researchers from different fields. Final projects will be group projects, typically with two students per group.
Note that this course is an "Integrierte Lehrveranstaltung", which does not have distinct slots for lectures and exercises. During the first part of the semester, we will typically teach two lectures per week; later, the focus will shift towards project work. Note that the projects are an integral part of the course: it is not possible to get credit for the course without completing a project.
[b]Student Projects:[/b]
As an integrated course, students can choose among different projects. Topics will come from computational engineering, computer vision, bioinformatics, and other fields, demonstrating the diversity of problems that can be tackled by massively parallel approaches. Each project will have two supervisors: one from the specific field and one for CUDA-related questions. The goal is to learn massively parallel programming not from toy examples but from real problems.
- Lecturer: Maximilian von Bülow
Semester: WT 2021/22