The rapid evolution of modern generative AI architectures (option B) poses the largest barrier to explainability.
Complex deep learning models like LLMs, diffusion models, and transformer-based architectures involve millions or billions of parameters, making it extremely challenging to determine precisely how outputs are produced.
AAIA notes that explainability challenges arise because:
Model structures are highly complex
Parameter interactions are nonlinear
Internal representations are not human-interpretable
Continuous updates make documentation outdated
Training data and latent representations create opaque reasoning chains
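The nonlinearity point above can be illustrated with a minimal sketch. The toy two-layer ReLU network below uses hypothetical weights chosen for illustration; even with full access to every parameter, the attribution of a single input feature flips sign depending on which hidden units are active, which is one reason per-parameter inspection does not yield a human-interpretable explanation at scale.

```python
# Toy 2-layer ReLU network with fully known, hand-picked weights.
# Even in this tiny case, "why did the model output this?" depends on
# which nonlinear units happen to be active for a given input.
W1 = [[1.5, -2.0], [-1.0, 1.0]]  # hypothetical first-layer weights
W2 = [1.0, 1.0]                  # hypothetical second-layer weights

def relu(x):
    return max(0.0, x)

def forward(x):
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def sensitivity(x, i, eps=1e-4):
    # Finite-difference estimate of the output's sensitivity to input i.
    xp = list(x)
    xp[i] += eps
    return (forward(xp) - forward(x)) / eps

# The same feature's effect changes sign across inputs because different
# ReLU units are active in different regions of the input space:
print(sensitivity([1.0, 0.0], 0))   # ≈ 1.5  (first hidden unit active)
print(sensitivity([-1.0, 0.0], 0))  # ≈ -1.0 (second hidden unit active)
```

With billions of parameters instead of six, these input-dependent interaction effects compound across many layers, which is why reasoning chains in production-scale models remain opaque.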
Bias (option A) is primarily a fairness concern, not a barrier to explainability.
Stakeholder alignment (option C) is a governance issue.
Lack of staff experience (option D) is a training gap, not a structural barrier.
The inherent technical complexity and speed of model evolution are the primary obstacles.
References:
AAIA Domain 5: Explainability Challenges
AAIA Domain 1: Advanced AI Model Architectures