This paper presents an approach that combines conventional image processing with deep learning by fusing the features from the individual techniques. We hypothesize that the two techniques, with different error profiles, are synergistic. The conventional image processing arm uses three handcrafted, biologically inspired image processing modules and one clinical information module. The image processing modules detect lesion features comparable to clinical dermoscopy criteria: atypical pigment network, color distribution, and blood vessels. The clinical module includes information submitted to the pathologist: patient age, gender, lesion location, size, and patient history. The deep learning arm uses transfer learning via a ResNet-50 network repurposed to predict the probability of melanoma. The classification scores of the individual modules from both processing arms are then ensembled via logistic regression to predict an overall melanoma probability. In cross-validated melanoma classification measured by area under the receiver operating characteristic curve (AUC), the fusion technique achieved an AUC of 0.94. In comparison, the ResNet-50 deep-learning classifier alone yields an AUC of 0.87, and the conventional image-processing classifier yields an AUC of 0.90. Further study of the fusion of conventional image processing techniques and deep learning is warranted.
Keywords
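The logistic-regression fusion step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the simulated per-module scores, the noise levels, and the plain gradient-descent training routine are all assumptions introduced here for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fusion(scores, labels, lr=0.5, steps=2000):
    """Fit logistic-regression weights that combine per-module scores
    into a single melanoma probability (simple batch gradient descent)."""
    n, d = scores.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        p = sigmoid(scores @ w + b)
        w -= lr * (scores.T @ (p - labels)) / n
        b -= lr * np.mean(p - labels)
    return w, b

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=300).astype(float)  # 1 = melanoma, 0 = benign

# Simulated module scores in [0, 1]: five columns standing in for the three
# image-processing modules, the clinical module, and the ResNet-50 score.
# Each column correlates with the label at a different noise level.
noise = np.array([0.6, 0.7, 0.8, 0.5, 0.4])
scores = np.clip(labels[:, None] + rng.normal(0.0, noise, size=(300, 5)), 0.0, 1.0)

w, b = fit_fusion(scores, labels)
fused = sigmoid(scores @ w + b)  # overall melanoma probability per lesion
accuracy = np.mean((fused > 0.5) == (labels > 0.5))
```

Because the fusion layer sees only one score per module, it remains a small, interpretable model: each learned weight indicates how much the ensemble trusts that module's output.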
- deep learning
- transfer learning