Integrating Computational Metrics and Human Judgments with Deep Learning to Enhance Website Aesthetic Evaluation
Keywords:
Aesthetics, Webpage, Computational Methods, Deep Learning, Subjective, Objective, Multi-output Regression

Abstract
The visual aesthetics of a web page is among the first aspects that capture users' attention. A design's aesthetics plays an important role, for example, when users subscribe to a site or purchase online services. However, developing a design that satisfies all users is a major challenge because users differ in taste, culture, and experience. In this context, computational methods have proven efficient at assessing measurable, objective aesthetics but cannot assess the personal aesthetics the human eye perceives; human judgment can evaluate subjective aesthetics, but it is inconsistent. Existing deep learning solutions often overlook these multiple perspectives, focusing mainly on subjective satisfaction. In this paper, we present a deep learning approach that integrates computational methods and human evaluation to assess different aspects of the visual aesthetics of web pages. Six objective aesthetic attributes were measured with computational methods to train a baseline model based on multi-output regression. The baseline model was then fine-tuned on three subjective aesthetics rated by human designers. Our approach achieved mean squared errors of 0.015, 0.023, and 0.035, and strong correlations with human evaluation of 0.931, 0.859, and 0.841, for the three subjective ratings (consistency, clarity, and satisfaction, respectively). These results demonstrate the effectiveness of the proposed approach in simultaneously automating the evaluation of multiple aspects of webpage aesthetics through multi-output regression.
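The two-stage scheme the abstract describes — pretraining a multi-output regression model on computed objective attributes, then swapping in a new head and fine-tuning on human ratings — can be sketched as follows. This is a minimal illustration, not the paper's architecture: the tiny convolutional backbone, feature width, and random tensors standing in for screenshots and scores are all assumptions.

```python
import torch
import torch.nn as nn

class AestheticNet(nn.Module):
    """Toy stand-in backbone plus a multi-output regression head."""
    def __init__(self, n_outputs):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_outputs)

    def forward(self, x):
        return self.head(self.backbone(x))

# Stage 1: train a 6-output baseline on computed objective attributes.
model = AestheticNet(n_outputs=6)
screens = torch.rand(4, 3, 224, 224)   # placeholder screenshot batch
objective = torch.rand(4, 6)           # placeholder computed metrics
loss = nn.functional.mse_loss(model(screens), objective)
loss.backward()                        # one gradient step would follow

# Stage 2: keep the pretrained backbone, replace the head with a
# 3-output one for the human-rated targets (consistency, clarity,
# satisfaction), and fine-tune on the subjective ratings.
model.head = nn.Linear(16, 3)
subjective = torch.rand(4, 3)          # placeholder designer ratings
pred = model(screens)
loss2 = nn.functional.mse_loss(pred, subjective)
```

Predicting all outputs from one shared backbone is what makes this multi-output regression: a single forward pass scores every aesthetic dimension at once, rather than training one model per attribute.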
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.