
A survey of more than 1,400 education leaders in the U.S. and the U.K. has revealed a significant gap between the rapid adoption of artificial intelligence (AI) in educational institutions and the implementation of effective security measures.
A new report from Keeper Security, "AI in Schools: From Promise to Peril," found that 41% of schools have already been targeted by AI-related cyber incidents, including sophisticated phishing campaigns and the creation of harmful student-generated content such as deepfakes.
The report shows that while AI tool usage is widespread—permitted for students in 86% of institutions and for faculty in 91%—formal governance has not kept pace. Most schools operate with informal guidelines rather than concrete policies.
Students are primarily using AI for supportive and exploratory tasks.
This widespread, loosely governed use creates significant cybersecurity risks in education. Although 83% of leaders are aware of potential AI risks, such as data leakage and misinformation, only 25% feel confident in identifying specific threats. “The challenge is not a lack of awareness, but the difficulty of knowing when AI crosses the line from helpful to harmful,” Anne Cutler from Keeper Security told TechNadu.
The survey also exposed sizable gaps in monitoring and awareness: while 30% of respondents said incidents were “contained quickly,” 39% were unsure whether their institution had been targeted at all, pointing to substantial deficiencies in threat monitoring and incident awareness. Only one in four felt "very confident" in recognizing AI-powered phishing or deepfakes, leaving institutions vulnerable.
With nearly all respondents (90%) expressing concern about AI-related threats, the report underscores the urgent need for schools to formalize policies, enhance staff training, and deploy robust security solutions to manage the risks of growing AI use.
Alex Quilici, CEO at YouMail, said, “The biggest cyber risk to schools is our kids. The reality is that younger generations are the ones being scammed the most. Gen Z in particular is impatient, naive, and easy to trick,” adding that they rarely question what they are seeing.
“The absence of policy is less about reluctance and more about being in catch-up mode,” Cutler told TechNadu. “When schools combine clear policies with practical support, AI becomes a constructive, trusted resource rather than a source of uncertainty.”
Cutler said there are steps administrators can take to better understand what is happening across their networks.
Kelvin Lim, Senior Director and Head of Security Engineering (APAC) at Black Duck, also recommends securing the software supply chain, enforcing policies, protecting data and privacy, and building a security mindset.
Cutler also emphasized that the media can help raise awareness by covering real-world deepfake and phishing cases, making these threats easier for communities to recognize.
In August, the Black Duck 2025 Embedded Software report found unprecedented AI integration across the embedded software ecosystem.