TelcoNews Asia - Telecommunications news for ICT decision-makers
[Image: SOC analysts in a Singapore office monitoring security alerts]

Singapore firms face AI security incidents despite controls

Wed, 29 Apr 2026

Proofpoint has released research showing that many organisations in Singapore have suffered AI-related incidents despite adopting AI security controls. The findings point to a gap between the pace of AI deployment and the ability of security teams to monitor and investigate risks.

Among organisations in Singapore, 87% have deployed AI assistants beyond the pilot stage, while 70% are piloting or rolling out autonomous agents. The study surveyed more than 1,400 full-time security professionals across 12 countries, including respondents in Singapore.

The figures suggest AI tools are now part of routine business operations, including customer support, internal messaging, email workflows and third-party collaboration. But security governance has not kept pace with the spread of those systems.

In Singapore, 58% of organisations were not fully confident their AI security controls would detect a compromised AI. Half of those with AI security coverage already in place had still experienced a confirmed or suspected AI-related incident.

Readiness to respond appears weaker still. Only 32% of respondents in Singapore said they were fully prepared to investigate an AI- or agent-related incident, while 51% reported difficulty correlating threats across channels.

Attack surface

Email remains the most common route for threats, cited by 58% of respondents in Singapore. Exposure also extends across SaaS and cloud applications, collaboration platforms such as Teams or Slack, file-sharing services, and AI assistants or agents themselves.

Among organisations that had already experienced an AI-related incident, the spread across channels was broader: 61% of those incidents involved file-sharing platforms and 58% involved collaboration tools.

That pattern matters because AI systems often interact with multiple business tools at once. When incidents move across email, cloud systems, collaboration platforms and automated agents, investigators need a clear view of activity across connected environments to reconstruct what happened.

Tool sprawl is another factor. Nearly all organisations in Singapore (98%) said managing multiple security tools was at least moderately challenging, and 61% described it as very or extremely difficult.

Respondents cited integration issues, difficulty correlating threats and visibility gaps as the main obstacles. Those problems can slow incident response at a time when AI systems can spread errors or malicious actions faster than manual processes.

Security gap

The research suggests confidence in business use of AI is outpacing confidence in the controls around it. Although 58% of organisations in Singapore said they had AI security coverage, many still reported weaknesses in training, governance alignment across teams, and monitoring or logging.

Among respondents, 55% identified gaps in training, 45% cited governance alignment problems across teams, and 43% reported insufficient monitoring or logging. Those weaknesses can leave companies unable to determine whether AI systems have been manipulated, are exposing sensitive data, or are acting with inappropriate permissions.

Ryan Kalember, chief strategy officer at Proofpoint, said the findings reflected a broader disconnect between AI roll-out and security readiness.

"This year's findings highlight a widening divide between AI adoption and security readiness.

"Organisations are scaling AI assistants and autonomous agents across core workflows, yet many cannot confirm their controls are effective or fully investigate incidents that move across collaboration channels. As AI becomes embedded in how work gets done, security leaders must rethink how they protect trusted interactions across people, data and AI systems," Kalember said.

Proofpoint argued that AI is not creating an entirely new class of risk so much as intensifying familiar security failures. Those include running untrusted code, mishandling sensitive data and losing control of credentials, but at far greater speed and scale.

"While AI has introduced new risks, such as prompt injection, its bigger impact has been amplifying the risks we've always had," Kalember said. "Running untrusted code, mishandling sensitive data, and losing control of credentials are the same challenges that humans have created for decades. AI executes them at machine speed and scale. When organisations hand AI the keys to act on their behalf, across customers, partners, and internal systems, the blast radius of any one of those failures grows dramatically. The answer isn't to treat AI as a novel threat category, but to apply rigorous, proven controls to what AI touches, what it runs, and what it's allowed to authenticate as. Organisations that get that foundation right early will scale AI confidently. Those that don't are just automating their own exposure."

Singapore focus

The Singapore results stand out because of the city-state's position as a regional centre for digital investment and AI adoption. Among respondents, 51% are pursuing vendor and tool consolidation, 58% believe a unified platform is more effective than point solutions, and 66% expect to move towards a unified platform approach.

Organisations are also planning broader defensive measures. The study found that 64% intend to expand AI protections and 61% plan to extend security coverage across collaboration channels.

George Lee, senior vice president for Asia Pacific and Japan at Proofpoint, said businesses in Singapore need tighter governance around AI use and data access.

"Singapore is a leading hub in Asia for AI deployment and innovation, but the priority now is to put stronger governance around how AI is used, what data it can access, and how activity is monitored across email, cloud and collaboration platforms," Lee said.

"The organisations that will move fastest and safest will be those that improve data visibility, govern AI agents with the same discipline as privileged users, and reduce the blind spots created by fragmented security tools."