ATiTHi: Deep Learning and Hybrid Optimization for Accurate Tourist Destination Classification
Keywords:
Content-based image classification, Tourist destination exploration, Convolutional Neural Networks (CNNs), Transfer learning

Abstract
This research introduces an approach to tourist destination exploration through content-based image classification, leveraging convolutional neural networks (CNNs). Recognizing the pivotal role of visual content in understanding tourism preferences and marketing destinations, the study focuses on India. A dataset named Indian Trajectory was curated, comprising six thousand images categorized into six major tourist destination classes. To address the challenge of limited dataset size, transfer learning strategies using weights pretrained on ImageNet were employed. Six prominent CNN models (VGG-16, VGG-19, MobileNetV2, InceptionV3, ResNet-50, and AlexNet) were initialized with pretrained weights and given adapted classifiers for tourist image classification. Hyperparameter optimization through a hybrid approach further improved the efficiency of the proposed ATiTHi model. In a performance comparison, VGG-16 outperformed the other models with an accuracy of 98%, surpassing MobileNetV2 (96.97%), VGG-19 (93.99%), InceptionV3 (91.79%), ResNet-50 (87.08%), and AlexNet (84.12%). Overall, the study demonstrates the potential of CNNs and transfer learning for automating the analysis of tourist photos toward a more satisfying and market-oriented tourism experience.
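The transfer-learning recipe the abstract describes (keep a pretrained convolutional base fixed and train a new classifier head for the six destination classes) can be illustrated with a minimal, framework-agnostic sketch. The synthetic feature vectors below are a stand-in assumption for the frozen VGG-16 embeddings used in the paper; only the new six-way softmax head is trained, mirroring the adapted-classifier step.

```python
import numpy as np

# Stand-in for frozen pretrained features: in the described setup the
# ImageNet-pretrained convolutional base is kept fixed, so each image
# reduces to a fixed feature vector. Synthetic clustered vectors play
# that role here, purely for illustration.
rng = np.random.default_rng(0)
n_samples, feat_dim, n_classes = 600, 512, 6  # six destination classes

means = rng.normal(size=(n_classes, feat_dim))      # one "prototype" per class
y = rng.integers(0, n_classes, size=n_samples)      # destination labels
X = means[y] + 0.5 * rng.normal(size=(n_samples, feat_dim))  # noisy features

# New classifier head, trained from scratch: a single dense softmax layer.
W = np.zeros((feat_dim, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(200):                      # plain full-batch gradient descent
    grad = softmax(X @ W + b)             # predicted class probabilities
    grad[np.arange(n_samples), y] -= 1.0  # dL/dlogits for cross-entropy loss
    grad /= n_samples
    W -= lr * (X.T @ grad)                # update only the head; base is frozen
    b -= lr * grad.sum(axis=0)

preds = (X @ W + b).argmax(axis=1)
print(f"head training accuracy: {(preds == y).mean():.2f}")
```

In a full Keras or PyTorch pipeline the same idea is expressed by loading the backbone with ImageNet weights, freezing its layers, and attaching a new dense layer with six outputs; the hyperparameters above (learning rate, iteration count) are illustrative, not the paper's tuned values.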
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.