Enhancing cultural heritage digitalization through 3D graphics algorithm and immersive visual communication technology


Fang Yuan


Keywords: 3D graphics algorithm, visual communication technology, cultural and creative product design, NeRF, VR, AR
Abstract

With the continuous advancement of digital technology, cultural and creative product design is shifting from static presentation to dynamic, immersive experience. This research addresses the challenges that traditional modeling methods face in accurately restoring complex textures and in cross-platform visual communication. The neural radiance field (NeRF) algorithm was enhanced by introducing a multi-level cost volume fusion module and a Gaussian-uniform mixture sampling strategy. Furthermore, a collaborative visual communication framework integrating augmented reality (AR) and virtual reality (VR) was constructed, enabling a pipeline from single-image input to high-precision 3D reconstruction and on to dynamic interaction. Experiments showed that the improved algorithm achieved peak signal-to-noise ratios of 30.63 and 30.15 on the UoM-Culture3D and Bootstrap3D synthetic datasets, with structural similarity indices of 0.88 and 0.89, respectively. Field deployment tests showed that integrating AR and VR technologies into visual communication strategies significantly improves spatial perception consistency, prolongs user engagement, and enhances detail recognition accuracy. This research highlights the potential of deeply coupling 3D graphics algorithms with immersive technology to improve the digital restoration accuracy and cultural dissemination efficiency of cultural and creative products, thereby supporting the modern inheritance of traditional culture.
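The Gaussian-uniform mixture sampling strategy named in the abstract can be illustrated with a minimal sketch: a fraction of ray samples is drawn uniformly over the near-far interval to preserve global coverage, while the remainder is drawn from a Gaussian centred on an estimated surface depth so that samples concentrate near likely geometry. The function below is an illustrative assumption about how such a mixture might be implemented, not the paper's actual code; all names and parameters (e.g. `uniform_frac`, `depth_std`) are hypothetical.

```python
import numpy as np

def mixture_sample_depths(near, far, n_samples, depth_mean=None,
                          depth_std=0.1, uniform_frac=0.5, rng=None):
    """Sketch of Gaussian-uniform mixture sampling along a NeRF ray.

    A `uniform_frac` share of samples covers [near, far] uniformly; the
    rest is drawn from N(depth_mean, depth_std) to densify sampling near
    an assumed surface depth. Depths are clipped to the ray bounds and
    sorted, as volume rendering expects monotonically increasing depths.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_uniform = int(round(uniform_frac * n_samples))
    n_gauss = n_samples - n_uniform
    if depth_mean is None:
        # No depth prior available: fall back to purely uniform sampling.
        n_uniform, n_gauss = n_samples, 0
        depth_mean = 0.5 * (near + far)
    u = rng.uniform(near, far, size=n_uniform)
    g = rng.normal(depth_mean, depth_std, size=n_gauss)
    t = np.clip(np.concatenate([u, g]), near, far)
    return np.sort(t)
```

In practice, `depth_mean` would come from a coarse pass or a cost-volume depth estimate; the uniform component guards against a wrong prior by keeping some samples everywhere along the ray.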
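The reported PSNR and SSIM figures follow standard definitions, which can be sketched as follows. Note the SSIM here is a single-window (global) variant for brevity; published results typically use the windowed SSIM averaged over local patches, so this sketch is only meant to make the metrics' definitions concrete.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=1.0):
    """Global (single-window) SSIM with the standard stabilising constants."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB, which puts the reported values above 30 dB in context (MSE below 0.001).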

Article Details

How to Cite
Yuan, F. (2026). Enhancing cultural heritage digitalization through 3D graphics algorithm and immersive visual communication technology. Machine Graphics & Vision, 35(1), 3–23. https://doi.org/10.22630/MGV.2026.35.1.1
References

L. Baker, J. Ventura, T. Langlotz, S. Gul, S. Mills, et al. Localization and tracking of stationary users for augmented reality. The Visual Computer 40(1):227-244, 2024. https://doi.org/10.1007/s00371-023-02777-2.

J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, et al. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5835-5844, 2021. https://doi.org/10.1109/ICCV48922.2021.00580.

J. Bast. Managing the image. The visual communication strategy of European right-wing populist politicians on Instagram. Journal of Political Marketing 23(1):1-25, 2024. https://doi.org/10.1080/15377857.2021.1892901.

J.-J. Cao, S.-M. Fang, and H. Contreras. Multimodal fusion visual communication method based on genetic algorithm. Journal of Network Intelligence 10(2):1071-1083, 2025. https://bit.kuas.edu.tw/jni/2025/vol10/s2/34.JNI-S-2024-05-019.pdf.

J. Fang and X. Gong. Application of visual communication in digital animation advertising design using convolutional neural networks and big data. PeerJ Computer Science 9:e1383, 2023. https://doi.org/10.7717/peerj-cs.1383.

Freepik. Find icons that go together. Fast. https://www.freepik.com/icons.

Y. Ge, B. Guo, P. Zha, S. Jiang, Z. Jiang, et al. 3D reconstruction of ancient buildings using UAV images and neural radiation field with depth supervision. Remote Sensing 16(3):473, 2024. https://doi.org/10.3390/rs16030473.

M. A. Guerroudji, K. Amara, M. Lichouri, N. Zenati, and M. Masmoudi. A 3D visualization-based augmented reality application for brain tumor segmentation. Computer Animation and Virtual Worlds 35(1):e2223, 2024. https://doi.org/10.1002/cav.2223.

A. Houdard, A. Leclaire, N. Papadakis, and J. Rabin. A generative model for texture synthesis based on optimal transport between feature distributions. Journal of Mathematical Imaging and Vision 65(1):4-28, 2023. https://doi.org/10.1007/s10851-022-01108-9.

Z. Jia, B. Wang, and C. Chen. Drone-NeRF: Efficient NeRF based 3D scene reconstruction for large-scale drone survey. Image and Vision Computing 143:104920, 2024. https://doi.org/10.1016/j.imavis.2024.104920.

X. Liao, X. Wei, M. Zhou, and S. Kwong. Full-reference image quality assessment: Addressing content misalignment issue by comparing order statistics of deep features. IEEE Transactions on Broadcasting 70(1):305-315, 2023. https://doi.org/10.1109/TBC.2023.3294835.

J. Lin, G. Sharma, and T. N. Pappas. Toward universal texture synthesis by combining texton broadcasting with noise injection in StyleGAN-2. e-Prime - Advances in Electrical Engineering, Electronics and Energy 3:100092, 2023. https://doi.org/10.1016/j.prime.2022.100092.

F. Liu, B. Lin, and K. Meng. Design and realization of rural environment art construction of cultural image and visual communication. International Journal of Environmental Research and Public Health 20(5):4001, 2023. https://doi.org/10.3390/ijerph20054001.

W. Liu, Y. Zang, Z. Xiong, X. Bian, C. Wen, et al. 3D building model generation from MLS point cloud and 3D mesh using multi-source data fusion. International Journal of Applied Earth Observation and Geoinformation 116:103171, 2023. https://doi.org/10.1016/j.jag.2022.103171.

G. Mazzacca, A. Karami, S. Rigon, E. Farella, P. Trybala, et al. NeRF for heritage 3D reconstruction. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 48(M-2-2023):1051-1058, 2023. https://doi.org/10.5194/isprs-archives-XLVIII-M-2-2023-1051-2023.

B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, et al. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM 65(1):99-106, 2021. https://doi.org/10.1145/3503250.

T. Müller, A. Evans, C. Schied, and A. Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics 41(4):102, 2022. https://doi.org/10.1145/3528223.3530127.

M. Pepe, V. S. Alfio, and D. Costantino. Assessment of 3D model for photogrammetric purposes using AI tools based on NeRF algorithm. Heritage 6(8):5719-5731, 2023. https://doi.org/10.3390/heritage6080301.

S. Qiu, S. Wang, X. Chen, F. Qian, and Y. Xiao. Ship shape reconstruction for three-dimensional situational awareness of smart ships based on neural radiation field. Engineering Applications of Artificial Intelligence 136:108858, 2024. https://doi.org/10.1016/j.engappai.2024.108858.

F. Sattler, B. Carrillo-Perez, S. Barnes, K. Stebner, M. Stephan, et al. Embedded 3D reconstruction of dynamic objects in real time for maritime situational awareness pictures. The Visual Computer 40(2):571-584, 2024. https://doi.org/10.1007/s00371-023-02802-4.

S. Shen, S. Xing, X. Sang, B. Yan, and Y. Chen. Virtual stereo content rendering technology review for light-field display. Displays 76:102320, 2023. https://doi.org/10.1016/j.displa.2022.102320.

X. Shi and R. Villegas. AI technology in the virtual reality environment of graphic design of dynamic art visual communication frame. Journal of Computational Methods in Sciences and Engineering 25(3):2603-2616, 2025. https://doi.org/10.1177/14727978251321333.

Z. Sun. BS-Objaverse. Hugging Face. https://huggingface.co/datasets/Zery/BS-Objaverse/.

Z. Sun, T. Wu, P. Zhang, Y. Zang, X. Dong, et al. Bootstrap3D: Improving multi-view diffusion model with synthetic data. arXiv, arXiv:2406.00093v2, 2024. https://doi.org/10.48550/arXiv.2406.00093.

Xinyi_Zheng. CULTURE3D: Cultural Landmarks and Terrain Dataset for 3D Applications. GitHub. https://github.com/X-Intelligence-Labs/CULTURE3D.

V. O. Yachnaya, V. R. Lutsiv, and R. O. Malashin. Modern automatic recognition technologies for visual communication tools. Computer Optics 47(2):287-305, 2023. https://doi.org/10.18287/2412-6179-CO-1154.

C. Yan, B. Gong, Y. Wei, and Y. Gao. Deep multi-view enhancement hashing for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 43(4):1445-1451, 2020. https://doi.org/10.1109/TPAMI.2020.2975798.

J.-W. Yang, J.-M. Sun, Y.-L. Yang, J. Yang, Y. Shan, et al. DMiT: Deformable Mipmapped Tri-plane representation for dynamic scenes. In: Computer Vision - ECCV 2024, pp. 436-453. Springer Nature Switzerland, Cham, 2025. https://doi.org/10.1007/978-3-031-73001-6_25.

J. You and X. Lu. Visual communication design based on machine vision and digital media communication technology. KSII Transactions on Internet and Information Systems 19(6):1888-1907, 2025. https://doi.org/10.3837/tiis.2025.06.007.

S. H. Yudhanto, F. Risdianto, and A. T. Artanto. Cultural and communication approaches in the design of visual communication design works. Journal of Linguistics, Culture and Communication 1(1):79-90, 2023. https://doi.org/10.61320/jolcc.v1i1.79-90.

Z. Zhang, L. Li, G. Cong, H. Yin, Y. Gao, et al. From speaker to dubber: Movie dubbing with prosody and duration consistency learning. In: Proceedings of the 32nd ACM International Conference on Multimedia, pp. 7523-7532, 2024. https://doi.org/10.1145/3664647.3680777.

M. Zhao. Application of image reconstruction algorithm combining FCN and Pix2Pix in visual communication design. Journal of Computational Methods in Sciences and Engineering 25(4):3137-3151, 2025. https://doi.org/10.1177/14727978251319398.
