Improving Retinal Blood Vessel Segmentation Accuracy with Hybrid Attention-Based CNNs
Keywords:
Attention mechanisms, Convolutional Neural Networks (CNNs), Deep learning, Diabetic retinopathy, Hypertensive retinopathy, Image segmentation, Retinal blood vessel segmentation

Abstract
Accurate retinal blood vessel segmentation is crucial for diagnosing and monitoring various ocular and systemic diseases. While convolutional neural networks (CNNs) have shown potential in this area, their performance is often hindered by the intricate and subtle structures of the retinal vasculature. This paper introduces a hybrid attention-based CNN architecture designed to overcome these challenges and improve segmentation accuracy. The model incorporates both spatial and channel attention mechanisms within a U-Net framework, enabling it to focus on the most relevant features in retinal images. By integrating attention gates and Squeeze-and-Excitation (SE) blocks, the network is better equipped to detect fine and complex blood vessels while suppressing interference from irrelevant background information. Experimental evaluations on two public datasets, STARE and DRIVE, demonstrate that the proposed method outperforms both attention-based and non-attention-based architectures, achieving state-of-the-art results. Specifically, the model attains accuracy scores of 0.9876 on STARE and 0.9797 on DRIVE. These results highlight the potential of the proposed approach to enhance the accuracy and robustness of retinal blood vessel segmentation, making it a promising tool for clinical applications.
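To make the two attention components named in the abstract concrete, the following PyTorch sketch shows a generic Squeeze-and-Excitation block (channel attention) and an additive attention gate (spatial attention). This is a minimal sketch under stated assumptions, not the paper's implementation: the module names, channel arguments, and reduction ratio are illustrative, and the gate assumes the gating signal and the skip feature map share the same spatial resolution.

# Illustrative sketch only; hyperparameters and names are not from the paper.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: channel attention via global pooling + small MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excite: rescale each channel by its learned weight


class AttentionGate(nn.Module):
    """Additive attention gate in the style of Oktay et al. (2018): a decoder
    signal g gates the encoder skip connection x spatially. Assumes g and x
    have already been brought to the same spatial size."""
    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # one spatial attention coefficient per pixel
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        a = self.psi(self.relu(self.w_g(g) + self.w_x(x)))
        return x * a  # suppress background regions, keep vessel responses

In a U-Net-style architecture such as the one described, a gate of this kind would typically sit on each encoder skip connection before concatenation in the decoder, with SE blocks inserted after convolutional blocks to reweight channels; the exact placement in the proposed model may differ.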