Description
AI research is more important now than ever. Trust in AI is critical, but it’s hard to build trust without metrics and documentation. How can we make documentation as easy as possible in order to maintain trust in the results from our research? Is there a way to organize our models so that we can ensure reproducibility? How can we save ourselves precious development time by automating parts of the metric tracking process?
In this talk, we’ll give a brief introduction to a popular metric-tracking tool, MLflow, before taking a deep dive into three lesser-known features that can enhance collaboration, increase transparency, and reduce the time wasted reproducing results.
The three features we’ll cover are autologging, MLflow system tags, and the MLflow model registry. We’ll see how these features can save you hours that would otherwise be spent writing boilerplate logging code, hunting through old experiments, or tracking down the right model version.
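As a taste of what the talk covers, here is a minimal sketch of how autologging and the model registry look in code. It assumes a scikit-learn workflow and a reachable tracking server; the model name "iris-classifier" is a placeholder for this example.

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Turn on autologging: parameters, metrics, and the fitted model are
# captured without explicit log_param/log_metric calls.
mlflow.autolog()

X, y = load_iris(return_X_y=True)

with mlflow.start_run() as run:
    RandomForestClassifier(n_estimators=100).fit(X, y)

# Register the autologged model in the model registry under a name
# ("model" is the default artifact path used by scikit-learn autologging;
# "iris-classifier" is a placeholder registry name).
mlflow.register_model(f"runs:/{run.info.run_id}/model", "iris-classifier")
```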
By the end of this talk, you’ll have the knowledge you need to use MLflow to its full advantage. You’ll be able to automatically log every parameter and metric for your framework of choice, link each set of metrics to the exact version of code that produced it for faster reproducibility, and follow a process you can rely on to write helpful documentation quickly.
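For example, one way to recover which code produced the metrics behind a registered model, sketched under the assumption that the run was launched from a git checkout (so MLflow recorded the `mlflow.source.git.commit` system tag automatically) and that a model named "iris-classifier" exists in the registry:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Find the run behind the latest registered version of the model,
# then read the git commit MLflow stored as a system tag on that run.
version = client.get_latest_versions("iris-classifier")[0]
run = client.get_run(version.run_id)
print(run.data.tags.get("mlflow.source.git.commit"))
```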