Transparency is a foundational principle for AI governance and effective risk management. When evaluating AI systems, IS auditors must assess not just how an algorithm functions, but also whether its processes and decisions are understandable and explainable. According to the ISACA Advanced in AI Audit™ (AAIA™) Study Guide, "algorithmic transparency—enabling auditors and stakeholders to see and understand the rationale behind automated decisions—is essential for ensuring accountability, fairness, and compliance with both organizational policies and regulatory requirements."
Transparent justification of decisions (option C) directly supports an auditor's ability to evaluate the appropriateness, logic, and fairness of an AI system's outputs. This is critical for identifying bias, confirming that ethical considerations are addressed, and verifying that outcomes are consistent with the system's intended objectives. Without transparent justification, an IS auditor cannot independently assess whether decisions rest on valid, legal, and ethical grounds.
By contrast, options A, B, and D describe standard operational aspects or supportive features; they do not directly enable auditors to scrutinize the algorithm's decision-making process. As the Study Guide explicitly states, “Effective AI governance frameworks require that systems provide clear, interpretable explanations for their decisions, supporting auditability and accountability across the AI lifecycle.”
[Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: "Transparency and Explainability in AI Systems," Subsection: "Auditability and Accountability"]