
Optimize your network inference time with OpenVINO


Speaker: Adrian Boguszewski

Track: PyData: Deep Learning

During the talk, I'll present the OpenVINO™ Toolkit. You'll learn how to automatically convert a model using Model Optimizer and how to run inference with OpenVINO Runtime, so your model runs with low latency on the CPU and iGPU you already have. All this magic takes only a few lines of code.

Recorded at the PyConDE & PyData Berlin 2022 conference, April 11-13 2022. More details at the conference page.

