AI and Cybersecurity: Some Interesting Thoughts from a Recent Podcast Chat
TL;DR: Attackers build unrestricted AI models while defenders work within ethical guardrails, creating a dangerous asymmetry. Your expertise determines how much AI amplifies your productivity. And sometimes the most sophisticated AI systems still fail because of a default password.
I recently had a great conversation with my good friend and former colleague Ibrahim Yusuf on his podcast, Yusuf on Security. We went down some fascinating rabbit holes about how AI is changing the cybersecurity landscape, and it turned into a longer conversation than we planned about AI security risks and the asymmetric advantage attackers have over defenders.
Here are a few of the most interesting points that came up during our chat.
Attackers Don't Have Guardrails (And That's a Problem)
One thing that struck me during our conversation was just how asymmetric this whole AI thing is. When I use tools like ChatGPT or Claude, they have all these built-in ethical constraints. Ibrahim gave a perfect example: he asked an LLM for code to automatically isolate an infected machine from the network (totally legitimate defensive work), and the AI gave him the code but immediately warned him not to misuse it.
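For flavour, here's roughly the kind of script he was asking for: a minimal Python sketch that quarantines a Windows host by flipping the built-in firewall to block everything. This is my own illustration, not the code the LLM actually produced, and in a real environment you'd trigger isolation through your EDR's API rather than hand-rolling it.

```python
import subprocess

def isolate_windows_host() -> None:
    """Quarantine the local host by blocking all traffic at the built-in firewall.

    Assumes a Windows machine and an elevated (Administrator) shell.
    """
    # Make sure the firewall is on for every profile.
    subprocess.run(
        ["netsh", "advfirewall", "set", "allprofiles", "state", "on"],
        check=True,
    )
    # Default-deny in both directions; "blockinboundalways" overrides
    # any existing allow rules, which is what you want during isolation.
    subprocess.run(
        ["netsh", "advfirewall", "set", "allprofiles", "firewallpolicy",
         "blockinboundalways,blockoutbound"],
        check=True,
    )

if __name__ == "__main__":
    isolate_windows_host()
    print("Host isolated: all inbound and outbound traffic blocked.")
```

Nothing exotic there, which is exactly the point: it's legitimate incident-response tooling, and whether it's defensive or malicious depends entirely on who runs it.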
Here's the thing: attackers don't deal with any of that. They're building their own models with all the guardrails turned off. As Ibrahim put it, "the bad guys don't have a law to fight by, they can just go crazy with it and the whole world is their oyster, whereas us as defenders, we've got so many guardrails." My own take: those ethical and moral boundaries are important and keep us grounded, but at the same time, collaboration and information sharing are what strengthen us as defenders.
This asymmetry is being documented across the industry. The World Economic Forum recently noted that adversaries are moving faster and experimenting freely with new tools, while defenders are often slowed by bureaucracy, legacy processes and risk aversion (World Economic Forum). Darktrace's 2025 predictions echo this, pointing out that security teams will be slower to adopt AI systems than adversaries because of the need to put in place proper security guardrails and build trust over time (Darktrace).
We're already seeing guardrail-free, attack-oriented models popping up on platforms like Hugging Face. The technology to build unrestricted AI is available to anyone; it's just a question of who weaponizes it faster.
Your Background Actually Matters More Than You Think
Something I've noticed in my own use of AI coding tools is that my 20+ years of experience makes a massive difference in how productive I can be with them. I shared with Ibrahim how my coding background lets me spot immediately when an AI is generating inefficient code or referencing some outdated library.
The crazy thing is, I can get stuck in these loops where the AI tries to patch a problem with workaround after workaround, and it just spirals. But because I understand how code works, I can step back and say, "let's start from scratch and keep it simple." Sometimes it's a case of approaching the problem from a different angle.
Ibrahim made a great point about this: your mileage with AI really does vary depending on what you bring to the table. The tool amplifies your existing knowledge. It's not replacing expertise; it's multiplying it.
The DeepSeek Moment: Specialized AI is the Future
We got into an interesting discussion about how the AI landscape is shifting. Ibrahim brought up DeepSeek, the Chinese coding model that achieved impressive results with far fewer resources than something like ChatGPT. The key? It was laser-focused on one thing rather than trying to do everything.
Ibrahim used a great analogy: you go to a GP for general stuff, but when you need deep expertise on something specific, you see a specialist like a cardiologist. That's where AI is heading.
What's fascinating is that this dramatically lowers the barrier to entry. Both defenders and attackers are going to have these armies of specialized AI tools. Want one that finds specific types of vulnerabilities? Build it. Need one to analyze security logs for a particular pattern? Build it.
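To make the log-analysis case concrete, here's a toy Python sketch that flags source IPs with repeated failed SSH logins. The log path and threshold are assumptions for illustration; a purpose-built model or tool would obviously go far beyond a regex, but even this level of automation used to require someone to sit down and write it.

```python
import re
from collections import Counter
from pathlib import Path

# Assumptions for the sketch: a standard sshd auth log and a flat threshold.
LOG_PATH = Path("/var/log/auth.log")
THRESHOLD = 10  # failed attempts before an IP gets flagged

FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")

def flag_brute_force(log_path: Path = LOG_PATH) -> dict[str, int]:
    """Count failed SSH logins per source IP and return the noisy ones."""
    counts: Counter[str] = Counter()
    for line in log_path.read_text(errors="ignore").splitlines():
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= THRESHOLD}

if __name__ == "__main__":
    for ip, hits in sorted(flag_brute_force().items(), key=lambda kv: -kv[1]):
        print(f"{ip}: {hits} failed logins")
```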
The arms race just got a lot more accessible to everyone. As Akamai noted in their 2024 review, AI will lower the barriers to entry for attackers, accelerating their ability to identify and exploit vulnerabilities (Akamai).
Sometimes It's Still Just a Default Password
Here's something that really brought the conversation back to earth. We were talking about all this sophisticated AI stuff, and then there's the story of the McDonald's hiring chatbot.
Researchers were trying to break into this AI-powered system and found that the AI itself was actually pretty well secured against attacks. But then they discovered the backend was using a default password. Game over. Up to 64 million job applications were potentially exposed (I said tens of thousands in our chat, but the actual scale was much larger).
It's a reminder that we can have all the fancy AI security tools in the world, but if we're still making basic mistakes like default passwords, none of it matters. The traditional stuff still trips us up.
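For contrast with everything else in this post, here's how little code the basic hygiene check takes: a minimal sketch that compares service credentials against a list of known defaults. The service inventory and password list below are made up for the example; in practice you'd pull credentials from your secrets manager and check against a proper wordlist.

```python
# Illustrative only: the inventory and password list below are assumptions.
KNOWN_DEFAULTS = {"admin", "password", "123456", "changeme", "default"}

def find_default_credentials(services: dict[str, str]) -> list[str]:
    """Return the names of services still using a well-known default password."""
    return [name for name, pw in services.items() if pw.lower() in KNOWN_DEFAULTS]

if __name__ == "__main__":
    services = {  # assumed example inventory, not real credentials
        "hr-chatbot-backend": "123456",
        "log-shipper": "S0me$tr0ngSecret!",
    }
    for name in find_default_credentials(services):
        print(f"[!] {name} is using a default password")
```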
Wrapping up on AI Security
What became clear during our conversation is that AI isn't making cybersecurity easier; it's accelerating everything. The pace of change is increasing, and the gap between organizations that really understand AI security threats and those that don't is widening.
The attackers building unrestricted models and moving faster than policy can keep up? They're not coming, they're already here. And they're probably already using specialized AI tools we haven't even thought of yet.
If you want to hear the full conversation, check out episode 241 of Yusuf on Security. We covered a lot more ground than I can fit here, and honestly, it's a conversation that raises more questions than it answers - which is kind of the point.
And if you're thinking about your organization's AI security posture, I've put together a free AI Security Maturity Assessment that looks at governance, technical controls, data practices, and employee awareness. Might be worth a look.