Machine Graphics & Vision https://mgv.sggw.edu.pl/ <p><strong><em>Machine GRAPHICS &amp; VISION</em></strong> is a refereed international journal, published quarterly by the <a href="https://iit.sggw.edu.pl/?lang=en" target="_blank" rel="noopener">Institute of Information Technology</a> of the <a href="https://www.sggw.edu.pl/en/" target="_blank" rel="noopener">Warsaw University of Life Sciences</a> – <a href="https://www.sggw.edu.pl/en/" target="_blank" rel="noopener">SGGW</a>, in cooperation with the <a href="https://tpo.org.pl/" target="_blank" rel="noopener">Association for Image Processing</a>, Poland – <a href="https://tpo.org.pl/" target="_blank" rel="noopener">TPO</a>.</p> <p><strong><em>MG&amp;V</em></strong> has been published since 1992.</p> <p><strong><em>Machine GRAPHICS &amp; VISION</em></strong> provides a scientific exchange forum and an authoritative source of information in the field of pictorial information exchange between computers and their environment, including applications of visual and graphical computer systems (<a href="https://czasopisma.sggw.edu.pl/index.php/mgv/about">more</a>).</p> en-US mgv@sggw.edu.pl (Editorial Office) mgv@sggw.edu.pl (Editorial Office) Wed, 05 Nov 2025 00:00:00 +0000 OJS 3.3.0.7 http://blogs.law.harvard.edu/tech/rss 60 Skin lesion segmentation using SegNet with spatial attention https://mgv.sggw.edu.pl/article/view/10239 <p>Skin lesion segmentation identifies and outlines the boundaries of abnormal skin regions, and accurate segmentation may help in the early detection of skin cancer. It remains challenging, however, due to differences in skin tone, variations in lesion shape, and the presence of body hair. Moreover, variability in lesion appearance, uneven image quality, and the lack of clear lesion boundaries make the problem even harder. This paper proposes a SegNet model with spatial attention mechanisms for skin lesion segmentation.
Adding a spatial attention component to SegNet allows the model to focus on the most relevant regions of the image, leading to better delineation of the lesion boundary. The proposed model was evaluated on the ISIC 2018 dataset, where it attained an average accuracy of 96.25% and an average Dice coefficient of 0.9052. This performance indicates its possible application in automated skin disease diagnosis.</p> Maryam Arif, Almas Abbasi, Muhammad Arif, Muhammad Rashid Copyright (c) 2025 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10239 Wed, 05 Nov 2025 00:00:00 +0000 Perceptually optimised Swin-Unet for low-light image enhancement https://mgv.sggw.edu.pl/article/view/10482 <p>In this paper we propose a novel approach to low-light image enhancement using a transformer-based Swin-Unet and a perceptually driven loss that incorporates Learned Perceptual Image Patch Similarity (LPIPS), a deep-feature distance aligned with human visual judgements. Specifically, our U-shaped Swin-Unet applies shifted-window self-attention across scales with skip connections and multi-scale fusion, mapping a low-light RGB image to its enhanced version in one pass. Training uses a compact objective - Smooth-L₁, LPIPS (AlexNet), MS-SSIM (detached), inverted PSNR, channel-wise colour consistency, and Sobel-gradient terms - with a small LPIPS weight chosen via ablation. Our work addresses the limits of purely pixel-wise losses by integrating perceptual and structural components to produce visually superior results. Experiments on LOL-v1, LOL-v2, and SID show that while our Swin-Unet does not surpass the current state of the art on standard metrics, the LPIPS-based loss significantly improves perceptual quality and visual fidelity.
These results confirm the viability of transformer-based U-Net architectures for low-light enhancement, particularly in resource-constrained settings, and suggest exploring larger variants and further tuning of loss parameters in future work.</p> Tomasz M. Lehmann, Przemysław Rokita Copyright (c) 2025 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10482 Wed, 12 Nov 2025 00:00:00 +0000 Adaptation art image style transfer by integrating CSDA-FD algorithm and OSDA-DS algorithm https://mgv.sggw.edu.pl/article/view/10457 <p>Traditional domain adaptation methods depend heavily on data labels, and the transfer process can easily degrade performance on the training set, reducing the effectiveness of transfer learning. This study therefore proposes a domain adaptation model that combines feature disentangling and disentangling subspaces. The model separates the content and style features of images through disentangling, effectively improving the quality of image transfer. In the experiments, the proposed feature disentangling algorithm achieved pixel accuracy of over 84% for semantic segmentation of 14 categories, including roads, sidewalks, and buildings, with an average pixel accuracy of 85.2%. On ImageNet, the precision, recall, F₁ score, and overall accuracy of the proposed algorithm were 0.942, 0.898, 0.854, and 0.841, respectively. Compared with the One-Class Support Vector Machine, the precision, recall, F₁ score, and overall accuracy were improved by 8.4%, 10.3%, 27.8%, and 10.9%, respectively.
The proposed model can accurately recognize and classify images, providing effective technical support for image transfer.</p> Peng Wang Copyright (c) 2025 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10457 Thu, 04 Dec 2025 00:00:00 +0000 A method for generating advertising design images based on hierarchical features and simulated annealing algorithm https://mgv.sggw.edu.pl/article/view/10506 <p>With the development of intelligent design and computer-aided design technology, advertising image generation has gradually attracted attention: over 70% of digital advertisers regard automated creative generation as a key direction for improving efficiency and precision of delivery. To address the shortcomings of existing advertising design methods in feature extraction and optimization efficiency, a novel advertising design image generation method combining hierarchical feature extraction with simulated annealing optimization is proposed. The method uses a hierarchical feature model to extract multi-scale semantic information from advertising images and optimizes the layout with a simulated annealing algorithm to improve the visual consistency of the generated designs. The experimental results show that the proposed model achieves the highest mean fitness, especially in the first set of hyperparameter settings, with mean fitness values of 3.00 and 2.95 on the training and testing sets, respectively. Meanwhile, its standard deviation and coefficient of variation are significantly lower than those of the other algorithms, indicating minimal fluctuation and the strongest robustness. In addition, across the three types of advertising images considered – product promotion, brand promotion, and directive sign advertisements – the generated images have significant advantages in visual clarity, perceptual quality, and other aspects.
For the directive sign advertisements, for example, the mean square error, peak signal-to-noise ratio, structural similarity, and learned perceptual image patch similarity of this model are 0.025, 66.97, 0.67, and 0.10, respectively, significantly better than those of the other two comparison methods. The results indicate that the proposed model is suitable for scenarios that require high-precision image generation, providing an effective solution for intelligent advertising generation.</p> Jian Zhang Copyright (c) 2025 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10506 Mon, 08 Dec 2025 00:00:00 +0000
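The simulated annealing optimization named in the last abstract is a generic technique; the following minimal sketch illustrates it on a toy layout objective. The cost function, neighbourhood, and all parameter values here are illustrative assumptions of this note, not the paper's actual model:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated annealing: worse candidates are accepted with
    probability exp(-delta / T), letting the search escape local minima
    while the temperature T is still high."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = cost(cand) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling  # geometric cooling schedule
    return best

# Toy "layout" objective: place two elements on a canvas near target
# positions (a stand-in for a visual-consistency score).
targets = [(20.0, 30.0), (70.0, 80.0)]

def cost(layout):
    return sum(math.dist(p, t) for p, t in zip(layout, targets))

def neighbor(layout, rng):
    # Perturb one randomly chosen element by a small random offset.
    i = rng.randrange(len(layout))
    x, y = layout[i]
    new = list(layout)
    new[i] = (x + rng.uniform(-5, 5), y + rng.uniform(-5, 5))
    return new

start = [(50.0, 50.0), (50.0, 50.0)]
best = simulated_annealing(cost, neighbor, start)
print(cost(start), cost(best))
```

The acceptance rule is what distinguishes annealing from greedy hill climbing: early on, high temperature makes uphill moves likely, while the cooling schedule gradually turns the search into pure descent.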