Description
It is often the case that we build our numerical and data pipelines on safe, fast libraries that communicate with Python. New kinds of hardware accelerators such as GPUs, together with the emerging "Software 2.0" stack, demand approaches that were not contemplated in the past. In this talk we will discuss Julia, the Python-Julia ecosystem, and numerical problems that require us to write code closer to the hardware, and we will see how Julia maintains expressiveness without sacrificing speed. We will walk through programming GPGPUs in Julia, sample Neural Ordinary Differential Equations, and take a fresh look at the Julia language. We will explore how its performance is obtained on very simple tasks and, if time permits, we will look at the emitted LLVM IR, the CUDA programming model in Julia, and how Julia can help you write code for foreign architectures such as Google TPUs.
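To give a flavor of the GPGPU material, a minimal vector-addition kernel written against the CUDA.jl package might look like the sketch below (the choice of CUDA.jl and the launch parameters are illustrative assumptions, not fixed contents of the talk):

```julia
using CUDA

# Element-wise vector addition: each GPU thread handles one index.
function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

n = 2^20
a = CUDA.fill(1.0f0, n)    # arrays allocated on the GPU
b = CUDA.fill(2.0f0, n)
c = CUDA.zeros(Float32, n)

# Launch with 256 threads per block and enough blocks to cover n elements.
@cuda threads=256 blocks=cld(n, 256) vadd!(c, a, b)
```

On the inspection side, Julia's standard tooling (for example the @code_llvm macro from InteractiveUtils) prints the LLVM IR emitted for a given call, which is the kind of output the talk proposes to read through.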