On the use of CNNs with patterned stride for medical image analysis


Oge Marques
Luiz Zaniolo


Keywords: convolutional neural networks, patterned stride, medical image classification, deep learning
Abstract
The use of deep learning techniques for early and accurate medical image diagnosis has grown significantly in recent years, with encouraging results across many medical specialties, pathologies, and image types. One of the most popular deep neural network architectures is the convolutional neural network (CNN), widely used for medical image classification and segmentation, among other tasks. One of the configuration parameters of a CNN, the stride, regulates how sparsely the image is sampled during the convolution process. This paper explores the idea of applying a patterned stride strategy: pixels closer to the center of the image are processed with a smaller stride, concentrating the information sampled there, while pixels farther from the center are processed with larger strides and are consequently sampled more sparsely. We apply this method to different medical image classification tasks and demonstrate experimentally that the proposed patterned stride mechanism outperforms a baseline solution with the same computational cost (processing and memory). We also discuss the relevance and potential future extensions of the proposed method.
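
To make the idea concrete, the sketch below (not taken from the paper; the band width, stride values, and function name are illustrative assumptions) builds the kind of sampling grid a patterned stride would produce: a small stride inside a central band of the image and a larger stride outside it.

```python
import numpy as np

def patterned_positions(length, inner_frac=0.5, inner_stride=1, outer_stride=2):
    """Sampling positions along one image axis: advance by a small stride
    inside the central band and by a larger stride outside it.
    (Illustrative only; the 50% band and the strides 1/2 are assumptions.)"""
    center = length / 2.0
    half_band = (length * inner_frac) / 2.0
    positions, p = [], 0
    while p < length:
        positions.append(p)
        p += inner_stride if abs(p - center) <= half_band else outer_stride
    return np.array(positions)

# 2-D sampling grid for a 16x16 image: 1 marks a sampled position.
rows = patterned_positions(16)
cols = patterned_positions(16)
grid = np.zeros((16, 16), dtype=int)
grid[np.ix_(rows, cols)] = 1
print(grid)  # denser 1s near the center, sparser toward the borders
```

In an actual CNN, a pattern of this kind would govern where the convolution kernel is applied, so that the total number of sampled positions, and hence the computational cost, stays comparable to that of a fixed-stride baseline.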


How to Cite
Marques, O., & Zaniolo, L. (2021). On the use of CNNs with patterned stride for medical image analysis. Machine Graphics and Vision, 30(1/4), 3–22. https://doi.org/10.22630/MGV.2021.30.1.1
