
Lightning Talk: Adding Backends for TorchInductor: Case Study with Intel GPU

Description

  • There are two levels at which a new backend can integrate with the PyTorch compiler: the ATen/Prims IR level and the Inductor loop-IR level. ATen/Prims-level integration has been available through the custom-backend registration infrastructure (https://pytorch.org/docs/stable/dynamo/custom-backends.html). The loop-IR level, in contrast, lets a backend compiler plug in lower in the stack, where it can reuse Inductor's existing compiler infrastructure, such as loop fusion and memory planning. We developed a dynamic registration mechanism on the Inductor side: a backend registers its codegen for a particular device at runtime, and from then on it only needs to focus on generating optimal code for that device.
  • Case Study – Intel GPU Backend for Inductor: We take the Intel GPU backend for Inductor as an example of how to support a new device through the proposed registration mechanism. The Intel GPU backend is built on top of Triton, which we have enabled to support new hardware backends. In this context, the case study shows how "Inductor + Triton" can readily support a new accelerator.
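The dynamic registration idea described above can be sketched, in simplified form, as a device-keyed registry that maps a device string to a codegen class. All class and function names below are illustrative, not the actual Inductor API:

```python
# Simplified sketch of runtime backend registration for a loop-IR compiler,
# analogous in spirit to Inductor's mechanism. Names are hypothetical.

class BackendRegistry:
    """Maps a device string (e.g. "xpu" for Intel GPU) to a codegen class."""

    def __init__(self):
        self._codegens = {}

    def register_backend_for_device(self, device, codegen_cls):
        # A backend calls this once at runtime to claim a device string.
        self._codegens[device] = codegen_cls

    def get_codegen_for_device(self, device):
        # The compiler looks up the registered codegen when lowering loop IR.
        try:
            return self._codegens[device]
        except KeyError:
            raise RuntimeError(f"no codegen registered for device {device!r}")


class TritonCodegen:
    """Hypothetical codegen that lowers loop IR to Triton kernel source."""

    def generate(self, kernel_name):
        return f"# triton kernel for {kernel_name}"


registry = BackendRegistry()
# The Intel GPU backend registers its Triton-based codegen for "xpu".
registry.register_backend_for_device("xpu", TritonCodegen)

codegen = registry.get_codegen_for_device("xpu")()
print(codegen.generate("fused_add_relu"))
```

With this shape, the core compiler stays device-agnostic: loop fusion and memory planning run before the lookup, and only the final code-generation step dispatches to the backend registered for the target device.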