Artificial Intelligence is changing everything from how we write code to how we defend against cyber threats. But as I asked during my recent talk at Orange Systems, is AI truly our ally, or could it be our next major adversary?
The Dual Nature of AI
At the beginning of the session, I asked the audience:
“Who believes AI is secure?”
No one raised a hand, and they were right. AI itself is neither safe nor dangerous. Like a hammer, it can build or destroy, depending on who holds it and how it’s used.
We’ve already seen both sides. On one hand, AI helps cybersecurity teams detect threats, automate responses, and write complex detection logic faster than ever before. On the other, attackers now use AI to write malware, generate phishing emails, and even launch ransomware campaigns with almost no technical skills.
As I told the crowd:
“The same tools that defend us can also attack us. It’s not the tool; it’s the intent behind it.”
Real Incidents: When AI Turns Rogue
In one case I shared, a large insurance company asked me to test their internal chatbot, built on GPT-based LLMs. The bot could draft legal and financial documents, but when I intercepted its API calls, I found it could also list server files, access internal passwords, and even delete data on command.
A small permission misconfiguration had turned an office assistant into an insider threat.
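To make that failure mode concrete, here is a minimal, hypothetical sketch of the pattern (not the insurer’s actual code): a single over-permissive “tool” wired into an LLM agent is all it takes.

```python
import subprocess

# HYPOTHETICAL sketch of the misconfiguration class described above --
# an over-permissive "tool" exposed to an LLM agent. Not the insurer's
# actual code.

def run_command(command: str) -> str:
    """Tool the agent calls for 'document processing'.

    Because it shells out with no allowlist, a prompt-injected or
    misbehaving model can list files, read secrets, or delete data
    just as easily as it can format a document.
    """
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout or result.stderr

# The permission boundary is the tool's, not the prompt's:
print(run_command("ls /"))        # enumerate server files
print(run_command("cat .env"))    # leak credentials if the file exists
```

The fix is unglamorous: scope each tool to the one operation it actually needs, which is exactly the least-privilege point I return to below.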
In another story, researchers discovered PromptLock, the first known AI-powered ransomware, capable of encrypting files across Windows, Linux, and macOS, all of it generated by an AI model in real time. No human coder was needed.
And then there’s social engineering. Using AI-driven voice synthesis, attackers now impersonate executives to trick employees into transferring millions. I referenced a real case where a finance worker paid $25 million after a video call with a deepfaked “CFO.”
The Other Side: AI for Defense
AI is also an incredible force for good. I use it daily to build EDR detection rules, analyze malware, and create defensive frameworks. During one project, I built over 200 Sysmon and EDR rules in a single hour with AI’s assistance, something that would have taken weeks manually.
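For readers who haven’t written one, here is a representative Sysmon rule of the kind AI can help draft: flagging PowerShell launched by Microsoft Word, a classic initial-access pattern. It’s an illustrative sample written for this post, not one of the 200 rules from that project, so treat the schema version and field choices as assumptions to verify against your own Sysmon deployment.

```python
import xml.etree.ElementTree as ET

# Representative Sysmon rule of the kind described above: alert when
# Word spawns PowerShell. Illustrative only -- validate against your
# deployed Sysmon schema before use.
SYSMON_RULE = """
<Sysmon schemaversion="4.90">
  <EventFiltering>
    <RuleGroup name="OfficeSpawnsPowerShell" groupRelation="or">
      <ProcessCreate onmatch="include">
        <Rule groupRelation="and">
          <ParentImage condition="end with">WINWORD.EXE</ParentImage>
          <Image condition="end with">powershell.exe</Image>
        </Rule>
      </ProcessCreate>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
"""

# Sanity-check that the XML is well formed before deploying it.
ET.fromstring(SYSMON_RULE.strip())
print("rule parses OK")
```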
AI helps defenders scale their capabilities. It can model attacker behavior, detect anomalies, and even train employees through simulated phishing and awareness campaigns.
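On the anomaly-detection side, the core idea fits in a few lines: baseline normal behavior, then alert on sharp deviations. A toy sketch with synthetic numbers and an assumed three-sigma threshold:

```python
from statistics import mean, stdev

# Toy anomaly detector: baseline a host's failed-login counts over the
# past week, then flag today's count if it sits more than three
# standard deviations above that baseline. Synthetic numbers; the
# 3-sigma cutoff is a common starting point, not a universal rule.
history = [5, 3, 6, 4, 5, 4, 6]   # failed logins, last 7 days
today = 42

mu, sigma = mean(history), stdev(history)
if sigma and (today - mu) / sigma > 3:
    print(f"ALERT: {today} failed logins vs baseline {mu:.1f}±{sigma:.1f}")
```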
“AI is not only for the bad. It’s for the good too; it depends on how you wield it.”
Awareness Is Half the Battle
Most cyber incidents don’t start with sophisticated exploits; they start with a lack of awareness.
Employees use unapproved AI tools, share sensitive data with chatbots, or click phishing links that look perfectly legitimate.
To reduce these risks, I emphasized a few practical steps:
- Test your security tools. Never assume they work; prove it.
- Apply the principle of least privilege. Limit what systems and AI agents can do (see the sketch after this list).
- Implement AI usage policies. Define what platforms can be used and how.
- Educate your people. Awareness is 50% of cybersecurity.
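As a minimal illustration of the least-privilege point, an agent framework can deny every tool call by default and allow only an explicit list. The tool names here are hypothetical:

```python
# Minimal least-privilege gate for AI agent tools -- a sketch of the
# idea with hypothetical tool names, not a production framework.
ALLOWED_TOOLS = {"draft_document", "summarize_text"}   # explicit allowlist

def dispatch(tool_name: str, handler, *args):
    """Refuse any tool call the agent wasn't explicitly granted."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not permitted")
    return handler(*args)

# The agent can do its job...
print(dispatch("draft_document", lambda topic: f"Draft about {topic}", "renewals"))

# ...but anything outside its granted scope is denied by default:
try:
    dispatch("delete_file", lambda path: None, "/etc/passwd")
except PermissionError as err:
    print(f"blocked: {err}")
```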
Looking Ahead
We cannot predict the future, but we can recognize patterns. Quantum computing, AI agents, and autonomous systems are all advancing. The next generation of threats will combine them, and we must be ready before they arrive.
“When quantum computing reaches attackers’ hands, passwords will be useless. The time to adapt is now.”
The message I left with the audience was simple: AI is not the enemy; it’s a reflection of us.
Whether it becomes our greatest weapon or our biggest weakness depends entirely on how responsibly we use it.
Let’s continue building awareness, testing our defenses, and sharing knowledge.
Because in the end, cybersecurity is not just about protecting data; it’s about protecting people.