Description
The need for speed remains central to scientific computing. Historically, computers were limited to a few dozen processors, but with modern GPUs we can have thousands, or even millions, of cores running in parallel on distributed systems.
However, developing software for distributed GPU systems can be difficult: writing GPU code is challenging for non-experts, and distributed systems are inherently complex. We can address these challenges by using GPU-enabled libraries that mimic parts of the SciPy ecosystem, such as CuPy, RAPIDS, and Numba, to abstract away GPU programming complexity, combined with Dask to abstract away distributed computing complexity.
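To illustrate that first layer of abstraction, here is a minimal sketch (not part of the talk material; the array sizes are arbitrary) showing how CuPy serves as a drop-in replacement for NumPy:

```python
import cupy as cp

# CuPy mirrors the NumPy API, so familiar array code runs on the GPU unchanged.
x = cp.random.random((10_000, 10_000))  # allocated in GPU memory
y = (x @ x.T).sum(axis=0)               # matrix multiply and reduction on the GPU
print(y.shape)                          # (10000,)
```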
We discuss how Dask has come a long way in supporting distributed GPU-enabled systems by leveraging community standards and protocols, reusing open source libraries for GPU computing, and keeping it simple to build highly configurable, accelerated distributed software.
Dask has evolved over the last year to leverage multi-GPU computing alongside its existing CPU support. We present how this is made possible by NumPy-like libraries and how to get started writing distributed GPU software, as sketched below.
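As a hedged sketch of how these pieces fit together (assuming the dask, cupy, and dask-cuda packages are installed; the sizes and chunk shapes here are illustrative, not a recommendation):

```python
import cupy as cp
import dask.array as da
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

# Start a local cluster with one Dask worker per available GPU.
cluster = LocalCUDACluster()
client = Client(cluster)

# Create a Dask array whose chunks are CuPy arrays rather than NumPy arrays.
rs = da.random.RandomState(RandomState=cp.random.RandomState)
x = rs.normal(10, 1, size=(100_000, 100_000), chunks=(10_000, 10_000))

# Operations use the familiar NumPy-like API; each chunk runs on a GPU.
result = (x + x.T).mean(axis=0)
print(result.compute())
```

The same code runs on CPUs by dropping the CuPy-backed RandomState and the CUDA cluster, which is the kind of configurability the talk highlights.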