TelcoNews Asia - Telecommunications news for ICT decision-makers

Singapore firms lead in AI security but face shadow AI threat

Sun, 7th Sep 2025

New research indicates that organisations in Singapore report the highest global readiness for securing artificial intelligence, yet express greater concern about the risks of unsanctioned or 'shadow' AI use than their counterparts in any other country studied.

Singapore's readiness

Delinea's report, "AI in Identity Security Demands a New Playbook," is based on a survey of more than 1,700 IT decision-makers worldwide. It shows that 52% of organisations in Singapore believe they are fully equipped to secure AI, compared to a global average of 44%.

Drilling further into the findings, 91% of Singaporean businesses report that they are already using agentic or generative AI in daily operations. Despite this high adoption rate, just 56% say they have governance policies in place specifically for AI identities.

Concerns over shadow AI

The research highlights that unsanctioned AI - referred to as shadow AI - is a major concern for organisations in Singapore. According to the survey, 62% of Singaporean firms encounter shadow AI at least monthly, the highest proportion globally. Moreover, 38% report that these incidents happen multiple times per month.

Despite this improved readiness, visibility into machine identities remains incomplete. While 97% of organisations believe their machine identity security can keep pace with AI-related threats, only 70% report having full visibility into those identities.

Key security risks identified

The report identifies the primary security concerns associated with AI in identity management among Singaporean organisations. The leading issues are AI-generated phishing and deepfakes, cited by 54% of respondents, followed by unchecked access in agentic AI systems (52%), AI-driven credential theft (47%), and shadow AI (46%). Poor visibility into AI access workflows was also cited by 40% of those surveyed.

"Agentic AI demands agentic security. Organisations must rethink how they approach identity, building adaptive, risk-aware systems that verify and secure every action, whether it's human- or machine-driven. AI agents, in particular, require more granular and dynamic identity access controls than the traditional role-based approaches. More broadly, every organization must build out a comprehensive AI governance model to ensure that it's being used securely and as intended," said Art Gilliland, CEO of Delinea.

Gaps in governance

Although the use of AI in daily business is widespread, the Delinea report reveals significant gaps in the governance of these technologies. Just 63% of organisations were found to have an acceptable use policy for AI tools, and only 64% have access controls in place for AI agents. Despite a confident outlook on readiness, then, many organisations remain exposed to the risks of unsanctioned AI usage and weak oversight.

As AI continues to be integrated into IT and security operations, the report recommends prioritising robust identity governance and implementing adaptive security controls to address emerging threats related to both sanctioned and unsanctioned use.