Machine Graphics and Vision
https://mgv.sggw.edu.pl/
<p><strong><em>Machine GRAPHICS & VISION</em></strong> is a refereed international journal, published quarterly by the <a href="https://iit.sggw.edu.pl/?lang=en" target="_blank" rel="noopener">Institute of Information Technology</a> of the <a href="https://www.sggw.edu.pl/en/" target="_blank" rel="noopener">Warsaw University of Life Sciences</a> – <a href="https://www.sggw.edu.pl/en/" target="_blank" rel="noopener">SGGW</a>, in cooperation with the <a href="https://tpo.org.pl/" target="_blank" rel="noopener">Association for Image Processing</a>, Poland – <a href="https://tpo.org.pl/" target="_blank" rel="noopener">TPO</a>.</p> <p><strong><em>MG&V</em></strong> has been published since 1992.</p> <p><strong><em>Machine GRAPHICS & VISION</em></strong> provides a scientific exchange forum and an authoritative source of information in the field of, in general, pictorial information exchange between computers and their environment, including applications of visual and graphical computer systems (<a href="https://czasopisma.sggw.edu.pl/index.php/mgv/about">more</a>).</p>
Publisher: Szkoła Główna Gospodarstwa Wiejskiego w Warszawie
Language: en-US
Machine Graphics and Vision
ISSN 1230-0535
Brain tumor classification using feature extraction and ensemble learning
https://mgv.sggw.edu.pl/article/view/9835
<p>Brain tumors (BT) are considered the second leading cause of human death worldwide, and they pose significant challenges in the field of medical diagnosis. Early detection is crucial for effective treatment and improved patient outcomes; studies dealing with tumor detection therefore play a vital role in early disease prediction in medicine. Despite advancements in medical imaging technologies, accurate and efficient classification of BT remains a complex task. This study addresses this challenge by proposing a novel method for brain tumor classification that combines ensemble learning techniques with feature extraction from neuroimaging data. Our methodology involves the preprocessing of neuroimaging data, followed by feature extraction using descriptor techniques. The extracted features are then used as inputs to ensemble learning classifiers. Experimental results demonstrate the efficacy of the proposed approach in accurately classifying brain tumors with high precision and recall rates. The ensemble learning framework, combined with feature extraction, outperforms several benchmark models commonly used in brain tumor classification, including AlexNet, VGG-16, and MobileNet, in terms of classification accuracy and computational efficiency. By integrating ensemble learning with feature extraction from neuroimaging data, the proposed method offers a promising solution for improving the accuracy and efficiency of brain tumor diagnosis, thereby facilitating timely intervention and treatment planning. The findings of this study contribute to the advancement of medical imaging-based classification systems for brain tumors, with implications for enhancing patient care and clinical decision-making in neuro-oncology.</p>
Iliass Zine-dine, Anass Fahfouh, Jamal Riffi, Khalid El Fazazy, Ismail El Batteoui, Mohamed Adnane Mahraz, Hamid Tairi
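The pipeline this abstract describes, descriptor-based feature extraction feeding an ensemble of classifiers, can be sketched as follows. Everything here is an illustrative stand-in assuming scikit-learn: the histogram descriptor, the member classifiers, and the synthetic image data are not the authors' exact setup.

```python
# Minimal sketch of the pipeline: preprocessing -> descriptor features
# -> ensemble classifier. Descriptors, models, and data are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

def extract_features(image):
    """Stand-in descriptor: a 16-bin intensity histogram of a 2-D image."""
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
    return hist

rng = np.random.default_rng(0)
# Synthetic "scans": class 0 limited to darker intensities, class 1 full range.
images = [rng.random((32, 32)) * (0.5 + 0.5 * (i % 2)) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])
X = np.array([extract_features(im) for im in images])

# Soft voting averages the members' predicted class probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)
ensemble.fit(X[:150], labels[:150])
accuracy = ensemble.score(X[150:], labels[150:])
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the `extract_features` step would be replaced by the descriptor techniques the paper refers to, and the member models would be tuned on real neuroimaging data.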
Copyright (c) 2024 Machine Graphics and Vision
Published: 2024-12-27 · Vol. 33, No. 3/4, pp. 3-28 · DOI: 10.22630/MGV.2024.33.3.1

Classification of maize growth stages using deep neural networks with voting classifier
https://mgv.sggw.edu.pl/article/view/9935
<p>Deep learning significantly supports key tasks in science, engineering, and precision agriculture. In this study, we propose a method for automatically determining maize developmental stages on the BBCH scale (phases 10-19) using RGB and multispectral images, deep neural networks, and a voting classifier. The method was evaluated using RGB images and multispectral data from the MicaSense RedEdge MX-Dual camera, with training conducted on HTC_r50, HTC_r101, HTC_x101, and Mask2Former architectures. The models were trained on RGB images and separately on individual spectral channels from the multispectral camera, and their effectiveness was evaluated based on classification performance. For multispectral images, a voting classifier was employed because the varying perspectives of individual spectral channels made it impossible to align and merge them into a single coherent image. Results indicate that HTC_r50, HTC_r101, and HTC_x101 trained on spectral channels with a voting classifier outperformed their RGB-trained counterparts in precision, recall, and F1-score, while Mask2Former demonstrated higher precision with a voting classifier but achieved better accuracy, recall, and F1-score when trained on RGB images. Mask2Former trained on RGB images yielded the highest accuracy, whereas HTC_r50 trained on spectral channels with a voting classifier achieved superior precision, recall, and F1-score. This approach facilitates automated monitoring of maize growth stages and supports result aggregation for precision agriculture applications. It offers a scalable framework that can be adapted for other crops with appropriate labeled datasets, highlighting the potential of deep learning for crop condition assessment in precision agriculture and beyond.</p>
Justyna S. Stypułkowska, Przemysław Rokita
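The aggregation step described above, where models trained on individual spectral channels each predict a growth stage and the results are combined, can be sketched as a plain majority vote. The BBCH labels below are illustrative placeholders, not results from the paper.

```python
# Majority-vote aggregation of per-spectral-channel predictions (sketch).
from collections import Counter

def majority_vote(per_channel_preds):
    """Return the class label predicted by the most per-channel models."""
    return Counter(per_channel_preds).most_common(1)[0][0]

# Illustrative predictions from five models, one per spectral channel.
channel_predictions = ["BBCH-14", "BBCH-14", "BBCH-15", "BBCH-14", "BBCH-13"]
print(majority_vote(channel_predictions))  # prints "BBCH-14"
```

A weighted variant (e.g. weighting each channel's vote by its validation accuracy) would follow the same pattern, summing weights per label instead of counting votes.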
Copyright (c) 2024 Machine Graphics and Vision
Published: 2024-12-27 · Vol. 33, No. 3/4, pp. 29-53 · DOI: 10.22630/MGV.2024.33.3.2

Selecting update blocks of convolutional neural networks using genetic algorithm in transfer learning
https://mgv.sggw.edu.pl/article/view/9214
<p>The performance of convolutional neural networks (CNN) for computer vision problems depends heavily on their architectures. Transfer learning performance of a CNN strongly relies on the selection of its trainable layers. Selecting the most effective update layers for a certain target dataset often requires expert knowledge of CNN architecture that many practitioners do not possess. General users prefer to use an available architecture (e.g. GoogLeNet, ResNet, or EfficientNet) that was developed by domain experts. With the ever-growing number of layers, it is becoming increasingly difficult and cumbersome to handpick the update layers. Therefore, in this paper we explore the application of a genetic algorithm to mitigate this problem. The convolutional layers of popular pre-trained networks are often grouped into modules that constitute their building blocks. We devise a genetic algorithm to select blocks of layers for updating the parameters. By experimenting with EfficientNetB0 pre-trained on ImageNet and using three popular image datasets, namely Food-101, CIFAR-100, and MangoLeafBD, as target datasets, we show that our algorithm yields similar or better results than the baseline in terms of accuracy, and requires lower training and evaluation time due to learning a smaller number of parameters. We also devise a measure called block importance to quantify each block's efficacy as an update block, and we analyze the importance of the blocks selected by our algorithm.</p>
Md. Mehedi Hasan, Muhammad Ibrahim, Md. Sawkat Ali
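The block-selection idea can be sketched as a genetic algorithm over bitstrings, one bit per candidate update block. The fitness function below is a hypothetical stand-in: in the paper's setting, evaluating a mask would involve actually fine-tuning the selected blocks and measuring accuracy and training time.

```python
# GA sketch: evolve a binary mask over network blocks; bit i = 1 means
# block i's parameters are updated during fine-tuning.
import random

random.seed(0)
NUM_BLOCKS = 7  # e.g. the stage-level blocks of an EfficientNet-style backbone

def fitness(mask):
    """Hypothetical stand-in: reward selecting 'useful' blocks, penalize the
    training cost of each selected block. The paper would instead fine-tune
    and evaluate the network for each candidate mask."""
    useful = {2, 4, 6}
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in useful)
    return hits - 0.1 * sum(mask)

def mutate(mask, rate=0.2):
    # XOR with a random bool flips each bit with probability `rate`.
    return tuple(bit ^ (random.random() < rate) for bit in mask)

def crossover(a, b):
    cut = random.randrange(1, NUM_BLOCKS)  # single-point crossover
    return a[:cut] + b[cut:]

population = [tuple(random.randint(0, 1) for _ in range(NUM_BLOCKS))
              for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # elitist selection: keep top half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print("selected update blocks:", [i for i, b in enumerate(best) if b])
```

Because each fitness evaluation in the real setting is a full fine-tuning run, small populations and few generations, as sketched here, are what make the search tractable.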
Copyright (c) 2024 Machine Graphics and Vision
Published: 2024-12-27 · Vol. 33, No. 3/4, pp. 55-70 · DOI: 10.22630/MGV.2024.33.3.3