The IBM AI Risk Atlas (Bagehorn et al., 2025) organizes the risks associated with artificial intelligence into five categories: Training Data, Inference, Output, Non-Technical, and Agentic. Within this framework, the most heavily represented themes are prompt attacks (9.47%), societal impact (8.42%), governance (6.32%), and misuse (6.32%). The prominence of these themes indicates that the most discussed risks are not purely technical but are also deeply connected to broader social and regulatory concerns, suggesting that as AI systems become more integrated into daily life, the scope of their associated risks expands well beyond engineering limitations.
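To make the quoted shares concrete, the sketch below reproduces them as counts over a catalog of risks. The total of 95 is an assumption, chosen only because each quoted percentage comes out as a whole count out of 95; the per-theme counts are inferred from the percentages, not taken from the paper.

```python
# Minimal sketch of the Atlas-style breakdown described above. The theme
# names follow the text; the catalog size of 95 is an assumption (it is
# the total that makes the quoted percentages correspond to whole counts).
from collections import Counter

# Hypothetical tally: how many catalogued risks fall under each theme.
risk_counts = Counter({
    "prompt attacks": 9,
    "societal impact": 8,
    "governance": 6,
    "misuse": 6,
    # remaining risks spread across the Training Data, Inference,
    # Output, Non-Technical, and Agentic categories
})

TOTAL_RISKS = 95  # assumed catalog size, not stated in this section

for theme, count in risk_counts.most_common():
    print(f"{theme}: {count}/{TOTAL_RISKS} = {count / TOTAL_RISKS:.2%}")
# prompt attacks: 9/95 = 9.47%
# societal impact: 8/95 = 8.42%
# governance: 6/95 = 6.32%
# misuse: 6/95 = 6.32%
```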
A notable feature of the Atlas is the large share of prompt attacks, which reflects growing attention to vulnerabilities that manifest during deployment. Even a model that is robust during training can be manipulated through user inputs once in operation. This shift signals a growing awareness of how AI systems can be compromised in the real world and underscores the need for security measures and risk mitigations that operate at inference time.
The prominence of societal impact and governance further highlights a shift in how AI risks are perceived. As adoption accelerates, societal consequences such as ethical concerns, workforce displacement, and the amplification of biases are emerging as central risks. These areas are increasingly viewed as integral to responsible development and deployment, on par with technical concerns such as robustness or accuracy: the field must focus not only on building effective models but also on addressing their broader societal implications.
The AI Risk Atlas thus portrays AI risks as both multi-dimensional and interconnected. While technical risks such as robustness, explainability, and value alignment remain important, attention is increasingly shifting toward governance and societal issues. Meeting this growing complexity requires cross-disciplinary collaboration that brings together technical experts, ethicists, regulators, policymakers, and others. Ultimately, such an approach can help ensure that AI technologies are developed responsibly and that their benefits are distributed equitably across society.