Designing an Artificial Intelligence (AI)-Based Secure Framework to Improve the Data Security of Confidential Academic Records
Keywords:
Artificial Intelligence (AI), Secured Framework, Data Security, Confidential Academic Records
Abstract
One of the difficulties associated with conducting analytics using artificial intelligence is maximising utility while also safeguarding human rights and maintaining meaningful human control. In this context, a key question for policymakers and lawmakers is how much protection should be automated in an increasingly digital society. Security-preserving technologies aim to build security-by-design into both the back end and the front end of digital services from the very beginning of the development process. They watch over data architectures to make certain that they remain sound and that data-related risks are neutralised during the design phase as well as during operation. In this paper, we discuss recent trends in the development of tools and technologies that help make AI security analytics safe and reliable, and we offer recommendations based on the research's findings and insights. The paper contributes to this discussion by investigating the technical solutions developed within the AI-based secured framework projects, which aim to protect academic records in terms of both their security and their confidentiality.
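A minimal sketch can make the security-by-design idea concrete. The paper does not prescribe a specific implementation, so the example below is purely illustrative: it assumes Python with the third-party cryptography package, and the SecureRecordStore class, its methods, and the sample record fields are hypothetical names chosen for this sketch. It keeps a confidential academic record encrypted at rest and appends every access to an audit trail, two of the design-time safeguards the abstract alludes to.

```python
# Illustrative sketch only (not the paper's implementation): a back-end store
# that keeps academic records encrypted at rest and logs every access.
# Assumes the third-party "cryptography" package is installed.
import json
from datetime import datetime, timezone

from cryptography.fernet import Fernet


class SecureRecordStore:
    """Hypothetical store: records are encrypted at rest, accesses are audited."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records: dict[str, bytes] = {}   # student_id -> ciphertext
        self.audit_log: list[dict] = []          # append-only access trail

    def put(self, student_id: str, record: dict) -> None:
        # Encrypt before the record ever touches storage (security-by-design).
        plaintext = json.dumps(record).encode("utf-8")
        self._records[student_id] = self._fernet.encrypt(plaintext)
        self._log("put", student_id)

    def get(self, student_id: str, requester: str) -> dict:
        # Every read is recorded so misuse can be detected during operation.
        ciphertext = self._records[student_id]
        self._log("get", student_id, requester)
        return json.loads(self._fernet.decrypt(ciphertext))

    def _log(self, action: str, student_id: str, requester: str = "system") -> None:
        self.audit_log.append({
            "action": action,
            "student_id": student_id,
            "requester": requester,
            "at": datetime.now(timezone.utc).isoformat(),
        })


if __name__ == "__main__":
    store = SecureRecordStore(Fernet.generate_key())
    store.put("S1001", {"name": "A. Student", "gpa": 3.8})
    print(store.get("S1001", requester="registrar"))
    print(store.audit_log)
```

In a production system the key would come from a managed key store and the audit trail would go to tamper-evident storage; the sketch only shows the basic shape of designing the safeguard in rather than bolting it on afterwards.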
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets readers share and adapt the material provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.