Description
Transfer learning is a powerful technique for boosting the performance of a deep learning model. However, the healthcare industry often works with very specific image data sets that are dissimilar to the large-scale data sets used to pretrain publicly available models. So, are there any benefits to applying transfer learning in healthcare? Come and listen to find out.
The idea of transfer learning is to reuse features learned on a related task to improve the performance of a model on a new task. The advantages of transfer learning are well known: faster training, less labeled data needed, and higher accuracy of the final model. As a result, the use of pretrained models has become a de facto standard in many practical applications, computer vision among them. This all rests on the assumption that the features learned on the source task are generic enough to be reused on the target task. However, the healthcare industry often has very specific data sets that are rather dissimilar to the large-scale, publicly available data sets used to pretrain the models. The goal of the presentation is to show whether there are any advantages to using pretrained models in such settings.
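To make the idea concrete, a minimal sketch of this workflow in PyTorch might look as follows. The choice of torchvision, ResNet-18, and a four-class target task is purely an illustrative placeholder, not the setup used in the experiments discussed in the talk:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # placeholder: number of classes in the target (medical) data set

# Reuse features learned on the source task (ImageNet classification) ...
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# ... and replace only the classification head for the new target task.
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optionally freeze the backbone so that only the new head is trained.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```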
To find out, we have designed a dedicated experiment in which we compare the performance of various CNN architectures on different medical imaging data sets, both public and private. We initialize the models either randomly or with ImageNet-pretrained parameters, under various fine-tuning settings. We also compare the results with the performance of small, custom-designed CNN networks.
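As an illustration of what such initialization variants can look like in code, here is a hypothetical helper, again using torchvision; the actual architectures, data sets, and training settings from the experiments are not reproduced here:

```python
import torch.nn as nn
from torchvision import models

def build_model(num_classes, init="imagenet", freeze_backbone=False):
    """Build a ResNet-50 initialized randomly or with ImageNet weights."""
    weights = models.ResNet50_Weights.IMAGENET1K_V2 if init == "imagenet" else None
    model = models.resnet50(weights=weights)  # weights=None -> random initialization
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    if freeze_backbone:
        for name, param in model.named_parameters():
            if not name.startswith("fc"):
                param.requires_grad = False
    return model

# Variants one might compare on a given medical imaging data set:
variants = {
    "random_init": build_model(4, init="random"),
    "imagenet_full_finetune": build_model(4, init="imagenet"),
    "imagenet_frozen_backbone": build_model(4, init="imagenet", freeze_backbone=True),
}
```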
The presentation will have the following outline. First, the audience will be introduced to transfer learning and its variants. Then the design of the dedicated computational experiments will be presented, followed by the results and conclusions. The latter will be compared with the conclusions of the latest state-of-the-art papers on the topic.
The participants will have a unique opportunity to learn about the benefits and pitfalls of applying transfer learning to imaging data sets that are much more specific than natural images from ImageNet.
Background knowledge required to follow the presentation: an intermediate understanding of machine learning, deep learning, and CNNs.