Adaptive AI Governance in Regulated Enterprise Data Platforms: A Trust-Calibrated Automation Framework

Authors

  • Suman Reddy Gaddam

Keywords:

AI Governance Framework, Algorithmic Risk Management, Explainable AI Compliance, Autonomous Decision Systems, Regulatory AI Automation

Abstract

Artificial intelligence (AI) has become foundational to enterprise data platforms in regulated industries, including financial services, healthcare, and compliance-sensitive digital ecosystems. While AI automation improves anomaly detection, prediction, and operational scale, delegating greater decision-making authority to algorithms introduces governance challenges, regulatory risk, and system-safety concerns. Traditional governance approaches that rely on fixed rules or after-the-fact audits are insufficient for environments in which AI systems make decisions: they fail to account for the dynamic behavior of those systems, the need for real-time oversight, and the evolving challenges of algorithmic bias and regulatory compliance in sectors such as healthcare and finance. The Trust-Calibrated Automation (TCA) Framework addresses this gap by adjusting the degree of automation to the risk profile, regulatory constraints, and financial stakes of each decision context. The framework comprises graduated control levels, an aggregate risk-assessment method, trust-based prioritization mechanisms, and design elements targeting known failure modes of AI systems, such as the algorithmic bias that caused a healthcare risk-prediction model to identify high-need Black patients at roughly half the rate of equally sick White patients.
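The core idea of calibrating automation to decision risk can be illustrated with a minimal sketch. This is not the paper's implementation: the `DecisionContext` fields, the thresholds, and the risk formula are all illustrative assumptions chosen to show the shape of a trust-calibrated level selector, not values taken from the TCA Framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class AutomationLevel(Enum):
    FULL_AUTOMATION = "full_automation"      # AI decides and acts unattended
    HUMAN_ON_THE_LOOP = "human_on_the_loop"  # AI acts, human audits afterwards
    HUMAN_IN_THE_LOOP = "human_in_the_loop"  # AI recommends, human approves
    MANUAL = "manual"                        # human decides without AI action

@dataclass
class DecisionContext:
    model_confidence: float        # calibrated probability in [0, 1]
    financial_impact: float        # e.g. monetary exposure of the decision
    regulatory_sensitivity: float  # 0 (unregulated) .. 1 (highly regulated)

def select_automation_level(ctx: DecisionContext,
                            confidence_floor: float = 0.9,
                            impact_cap: float = 10_000.0) -> AutomationLevel:
    """Combine a trust signal (model confidence) with a risk signal
    (regulatory sensitivity scaled by financial impact) to pick a level."""
    risk = ctx.regulatory_sensitivity * min(ctx.financial_impact / impact_cap, 1.0)
    if ctx.model_confidence >= confidence_floor and risk < 0.25:
        return AutomationLevel.FULL_AUTOMATION
    if ctx.model_confidence >= confidence_floor:
        return AutomationLevel.HUMAN_ON_THE_LOOP
    if risk < 0.75:
        return AutomationLevel.HUMAN_IN_THE_LOOP
    return AutomationLevel.MANUAL
```

Under this sketch, a confident model facing a low-stakes decision acts autonomously, while low confidence combined with high regulatory exposure forces a fully manual decision; the interesting design question, which the framework formalizes, is how the thresholds themselves should adapt over time.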

DOI: https://doi.org/10.17762/ijisae.v14i1s.8126


References

Saleema Amershi et al., "Software engineering for machine learning: A case study," Proc. IEEE/ACM 41st Int. Conf. Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2019. Available: https://doi.org/10.1109/ICSE-SEIP.2019.00042

Anna Jobin et al., "The global landscape of AI ethics guidelines," Nature Machine Intelligence, 2019. Available: https://doi.org/10.1038/s42256-019-0088-2

Andrew D. Selbst et al., "Fairness and abstraction in sociotechnical systems," Proc. Conference on Fairness, Accountability, and Transparency (FAT*), 2019. Available: https://doi.org/10.1145/3287560.3287598

Jenna Burrell, "How the machine 'thinks': Understanding opacity in machine learning algorithms," Big Data & Society, 2016. Available: https://doi.org/10.1177/2053951715622512

John D. Lee and Katrina A. See, "Trust in automation: Designing for appropriate reliance," Human Factors, 2004. Available: https://pubmed.ncbi.nlm.nih.gov/15151155/

Raja Parasuraman and Victor Riley, "Humans and automation: Use, misuse, disuse, abuse," Human Factors, 1997. Available: https://doi.org/10.1518/001872097778543886

Kevin Anthony Hoff and Masooda Bashir, "Trust in automation: Integrating empirical evidence on factors that influence trust," Human Factors, 2014. Available: https://doi.org/10.1177/0018720814547570

R. Parasuraman et al., "A model for types and levels of human interaction with automation," IEEE Trans. Syst., Man, Cybern. A, 2000. Available: https://doi.org/10.1109/3468.844354

Marialena Vagia et al., "A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed?" Applied Ergonomics, 2016. Available: https://doi.org/10.1016/j.apergo.2015.09.013

Philippe Artzner et al., "Coherent measures of risk," Mathematical Finance, 9: 203-228, 1999. Available: https://doi.org/10.1111/1467-9965.00068

Basel Committee on Banking Supervision, "International convergence of capital measurement and capital standards: A revised framework," Bank for International Settlements, Basel, Switzerland, 2005. Available: https://www.bis.org/publ/bcbs118.pdf

Chuan Guo et al., "On calibration of modern neural networks," Proc. 34th Int. Conf. Mach. Learn. (ICML), 2017. Available: https://arxiv.org/pdf/1706.04599

Balaji Lakshminarayanan et al., "Simple and scalable predictive uncertainty estimation using deep ensembles," 31st Conference on Neural Information Processing Systems, 2017. Available: https://proceedings.neurips.cc/paper_files/paper/2017/file/9ef2ed4b7fd2c810847ffa5fa85bce38-Paper.pdf

Scott M. Lundberg and Su-In Lee, "A unified approach to interpreting model predictions," Advances Neural Inf. Process. Syst. (NeurIPS), 2017. Available: https://www.semanticscholar.org/reader/442e10a3c6640ded9408622005e3c2a8906ce4c2

Marco Tulio Ribeiro et al., "Why should I trust you?: Explaining the predictions of any classifier," Proc. 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. Available: https://doi.org/10.1145/2939672.2939778

Jie Lu et al., "Learning under concept drift: A review," IEEE Trans. Knowl. Data Eng., 2018. Available: https://doi.org/10.1109/TKDE.2018.2876857

Ziad Obermeyer et al., "Dissecting racial bias in an algorithm used to manage the health of populations," Science, 2019. Available: https://doi.org/10.1126/science.aax2342

M. C. Paulk et al., "Capability maturity model, version 1.1," IEEE Software, 1993. Available: https://doi.org/10.1109/52.219617

Brent Daniel Mittelstadt et al., "The ethics of algorithms: Mapping the debate," Big Data & Society, 2016. Available: https://doi.org/10.1177/2053951716679679

Peter Kairouz, H. Brendan McMahan, et al., "Advances and open problems in federated learning," Foundations and Trends in Machine Learning, 2021. Available: https://doi.org/10.1561/2200000083


Published

14.02.2026

How to Cite

Suman Reddy Gaddam. (2026). Adaptive AI Governance in Regulated Enterprise Data Platforms: A Trust-Calibrated Automation Framework. International Journal of Intelligent Systems and Applications in Engineering, 14(1s), 96-105. Retrieved from https://www.ijisae.org/index.php/IJISAE/article/view/8126

Issue

Section

Research Article