https://mgv.sggw.edu.pl/issue/feed Machine Graphics & Vision 2026-04-16T17:45:08+00:00 Editorial Office mgv@sggw.edu.pl Open Journal Systems <p><strong><em>Machine GRAPHICS &amp; VISION</em></strong> is a refereed international <a href="https://mgv.sggw.edu.pl/open-access">open-access</a> journal, published quarterly by the <a href="https://iit.sggw.edu.pl/?lang=en" target="_blank" rel="noopener">Institute of Information Technology</a> of the <a href="https://www.sggw.edu.pl/en/" target="_blank" rel="noopener">Warsaw University of Life Sciences</a> – <a href="https://www.sggw.edu.pl/en/" target="_blank" rel="noopener">SGGW</a>, in cooperation with the <a href="https://tpo.org.pl/" target="_blank" rel="noopener">Association for Image Processing</a>, Poland – <a href="https://tpo.org.pl/" target="_blank" rel="noopener">TPO</a>.</p> <p><strong><em>MG&amp;V</em></strong> has been published since 1992.</p> <p><strong><em>Machine GRAPHICS &amp; VISION</em></strong> provides a scientific exchange forum and an authoritative source of information in the broad field of pictorial information exchange between computers and their environment, including applications of visual and graphical computer systems (<a href="https://czasopisma.sggw.edu.pl/index.php/mgv/about">more</a>).</p> https://mgv.sggw.edu.pl/article/view/10952 Application of computer vision technology in the recognition of Guzheng playing posture 2026-04-16T17:45:08+00:00 Dan Lu ludan_vip@outlook.com <p>This study addresses the teaching needs of traditional Chinese Guzheng performance and introduces computer vision and deep learning techniques into the gesture recognition task. A dataset covering a range of Guzheng playing actions is constructed from image sequences collected during performance. Combined with convolutional neural networks for feature extraction, this approach achieves automatic recognition of several basic gestures.
The model employs an optimized ResNet50 architecture and maintains high recognition accuracy under standardized image input and weighted classifiers. Experiments show that the system recognizes typical actions stably and tolerates complex action transitions and partial hand occlusion to a degree. When deployed in educational settings, the system provides real-time feedback and visual presentations, helping teachers evaluate the correctness of students' gestures and enhancing the interactivity of teaching. From the perspective of engineering implementation and educational practicality, this research provides methodological support for integrating traditional arts with artificial intelligence, laying the groundwork for future intelligent musical-instrument training systems. Overall, the results indicate that this technical approach has practical significance and application potential for improving the quality of Guzheng performance and reducing teaching costs.</p> 2026-03-31T00:00:00+00:00 Copyright (c) 2026 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10720 Deep learning for semantic segmentation of linear infrastructure from UAV imagery using NVIDIA Jetson AGX Orin 2026-04-13T19:18:05+00:00 Justyna S. Stypułkowska justyna.stypulkowska@ilot.lukasiewicz.gov.pl <p>A method is proposed for the semantic segmentation of RGB images captured by UAVs to detect railway infrastructure elements, including tracks, level crossings, and surrounding vegetation. The study was conducted at the Łukasiewicz Research Network – Institute of Aviation, where a proprietary, manually annotated UAV RGB dataset was created. Five deep neural network architectures were trained and compared: DeepLabV3+, Feature Pyramid Network (FPN), LinkNet, Pyramid Attention Network (PAN), and X-Unet. These models were chosen for their distinct approaches to semantic segmentation and feature processing.
Training was performed on a desktop computer with an NVIDIA GeForce RTX 3080 GPU, and tests were also run on an NVIDIA Jetson AGX Orin to assess deployment feasibility under real-time conditions. Experimental results confirm the strong performance of the analyzed models in segmenting railway tracks and surrounding vegetation. FPN achieved the highest scores, followed by X-Unet, DeepLabV3+, LinkNet, and PAN. All models operated reliably on the NVIDIA Jetson AGX Orin edge platform. The proposed solution can support remote monitoring of railway infrastructure and vegetation, and can be adapted to other applications by adjusting the training dataset and object categories. This research demonstrates the potential of deep learning as a powerful tool for analyzing UAV RGB imagery in engineering and environmental contexts.</p> 2026-03-31T00:00:00+00:00 Copyright (c) 2026 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10497 Intelligent extraction and layout optimization of digital media visual elements based on computer vision 2026-03-10T16:12:14+00:00 Hebin Wu WuHebin1989@163.com <p>In the field of digital media, intelligent extraction and layout optimization of visual elements face challenges such as inaccurate semantic understanding of elements and inefficient generation of layout strategies. This study proposes an extraction and layout-optimization model that integrates visual semantic understanding with intelligent optimization strategies, based on a segmentation Vision Transformer and a Multi-Objective Firefly Algorithm. The model also utilizes improved optical flow methods to efficiently capture dynamic information during the design process. Experimental results show that the segmentation Vision Transformer achieves an extraction accuracy of 98.8±0.2% for different categories of visual elements.
After 50 training iterations, the average Intersection-over-Union stabilizes at 0.95, and the harmonic mean of precision and recall reaches 98.17±0.38%. The evaluation of the integrated model shows that it achieves 99% accuracy in extracting visually similar elements. After layout optimization with the model, the aesthetic score increases to 95.6, and the spatial occupancy rate improves to 97.2%. These results indicate that the proposed model can effectively improve the accuracy of visual element extraction and the quality of layout optimization, significantly reducing the reliance of traditional methods on manual rules and providing an efficient, adaptive solution for the automated design of digital media.</p> 2026-02-21T00:00:00+00:00 Copyright (c) 2026 Machine Graphics & Vision https://mgv.sggw.edu.pl/article/view/10508 Enhancing cultural heritage digitalization through 3D graphics algorithm and immersive visual communication technology 2026-02-21T15:25:55+00:00 Fang Yuan yuanfang16316@163.com <p>With the continuous advancement of digital technology, the design of cultural and creative products is shifting from static presentation to dynamic, immersive experience. This research addresses the challenges that traditional modeling methods face in accurately restoring complex textures and in cross-platform visual communication. A neural radiance field algorithm was enhanced by introducing a multi-level cost-volume fusion module and a Gaussian-uniform mixture sampling strategy. Furthermore, a collaborative visual communication framework integrating augmented reality and virtual reality was constructed, enabling the transition from single-image input to high-precision 3D reconstruction and then to dynamic interaction.
The experiments showed that the improved algorithm achieved peak signal-to-noise ratios of 30.63 and 30.15 on the UoM-Culture3D and Bootstrap 3D synthetic datasets, with structural similarity indices of 0.88 and 0.89, respectively. Field deployment tests showed that integrating AR and VR technologies into visual communication strategies significantly improves spatial-perception consistency, prolongs user engagement time, and enhances detail-recognition accuracy. This research highlights the potential of tightly coupling 3D graphics algorithms with immersive technology, which can improve the accuracy of digital restoration and the efficiency of cultural dissemination for cultural and creative products, thereby supporting the modern inheritance of traditional culture.</p> 2026-02-16T00:00:00+00:00 Copyright (c) 2026 Machine Graphics & Vision
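<p>Editorial note: the peak signal-to-noise ratio (PSNR) quoted in the abstract above is a standard image-quality metric. As a minimal illustrative sketch (not the authors' implementation), PSNR for images with intensities in 0–255 can be computed from the mean squared error as follows; the toy 4-pixel "images" below are invented for the example:</p>

```python
import math

def psnr(reference, reconstruction, max_value=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat lists of pixel intensities in [0, max_value]."""
    if len(reference) != len(reconstruction):
        raise ValueError("images must have the same number of pixels")
    # Mean squared error between corresponding pixels.
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy example: a 4-pixel "image" and a slightly noisy reconstruction.
ref = [10, 20, 30, 40]
rec = [12, 18, 31, 39]
print(round(psnr(ref, rec), 2))  # prints 44.15
```

<p>Higher PSNR means a reconstruction closer to the reference, which is why values around 30 on the datasets above indicate good restoration quality.</p>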