
AI Won't Replace You — But Ignoring It Might

The real debate isn't Human vs. AI — it's Adaptation vs. Resistance. A perspective on AI adoption in the cybersecurity and CTF community.


The Real Debate Isn’t Human vs. AI — It’s Adaptation vs. Resistance

Introduction

There’s a quote from Sam Altman, CEO of OpenAI, that perfectly captures the moment we’re living in:

“AI won’t replace humans. But humans who use AI will replace those who don’t.”

This isn’t a threat — it’s an invitation. Yet, in many technical communities, especially in cybersecurity and CTF (Capture The Flag) competitions, AI adoption remains a heated debate. Some see it as cheating; others see it as the natural evolution of how we work. Having recently participated in several CTFs, I’ve witnessed this divide firsthand — and I believe it’s time we reframe the conversation.

The CTF Community: A Case Study in Resistance

CTF competitions are the playground of cybersecurity enthusiasts. They test skills in reverse engineering, web exploitation, cryptography, forensics, and more. The community is passionate, skilled, and — understandably — proud of the craft.

Recently, with the rise of custom MCP (Model Context Protocol) servers and AI-powered tools, a new wave of players has started integrating AI into their CTF workflows. The reaction from parts of the community? Pushback.

Many players prefer to enumerate targets manually, interact with challenges by hand, and solve problems through pure human effort. There’s a romanticism to it — the hacker ethos of “doing it yourself.” And that’s respectable.

But here’s the thing: respecting the craft doesn’t mean rejecting better tools.

The Double Standard: Authors Use AI Too

Let’s talk about the elephant in the room — the hypocrisy.

CTF challenge authors have full access to AI. And let’s be honest: many of them are already using it. AI helps authors:

  • 🧩 Design more complex challenges — Generate intricate logic, layered encryption, and multi-step exploitation paths
  • 🐛 Test and debug their tasks — Quickly validate that a challenge is solvable and free of unintended bugs
  • ✍️ Write challenge descriptions and hints — Polish writeups, generate lore, and structure instructions
  • 🔒 Harden their challenges against easy solutions — Use AI to probe their own tasks for shortcuts

So here’s the question that nobody wants to answer:

How is it acceptable for authors to use AI to create challenges, but unacceptable for players to use AI to solve them?

This is a double standard, plain and simple. If AI is part of the creation process, it should be part of the solving process too. You can’t benefit from a tool on one side of the table and ban it on the other. That’s not fairness — that’s gatekeeping.

AI Can Make CTFs Better

Instead of fearing AI, the CTF community should embrace it as an opportunity to raise the bar.

If authors know that players have access to AI, they’re forced to:

  • 📈 Create harder, more creative challenges — Simple tasks that AI can solve in seconds become obsolete. Good.
  • 🧠 Design problems that require human intuition — Challenges that demand creativity, lateral thinking, and real-world context — things AI still struggles with.
  • 🔗 Build multi-layered tasks — Challenges that combine multiple domains (web + crypto + forensics) in ways that require human orchestration, not just pattern matching.
  • 🎯 Focus on quality over quantity — No more filler challenges. Every task has to be genuinely thought-provoking.

AI doesn’t make CTFs easier. It makes bad CTFs obsolete. And that’s a win for everyone.

The result? A new generation of CTF challenges that are smarter, deeper, and more rewarding — for both humans and AI-assisted teams. The bar goes up. The competition gets better. The learning gets deeper.

It’s Not a War — It’s an Evolution

The mistake is framing AI as an opponent. It’s not Human vs. AI. It’s:

| ❌ Old Mindset | ✅ New Mindset |
| --- | --- |
| AI is cheating | AI is a force multiplier |
| Manual = Authentic | Smart tooling = Efficiency |
| AI replaces skill | AI amplifies skill |
| Playing to win | Playing with intelligence |
| Ban AI for players | Embrace AI for everyone |

The people who thrive won’t be those who fight against AI — they’ll be those who learn to set up intelligent workflows: choosing the right models, crafting effective prompts, building custom tool integrations, and knowing when to let AI assist and when to think critically themselves.

The Anthropic Effect: AI Meets Cybersecurity

Companies like Anthropic have made remarkable strides in making AI more capable and safer for technical domains. In cybersecurity specifically, AI is now being used for:

  • 🔍 Threat analysis — Rapidly processing and correlating threat intelligence
  • 🛡️ Vulnerability assessment — Identifying weaknesses in code at scale
  • 📝 Report generation — Turning raw findings into actionable reports
  • 🧩 CTF problem solving — Assisting with static analysis, crypto challenges, and more

This isn’t science fiction. This is today.
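To make the static-analysis point concrete: a common pattern is to pre-process an artifact locally before handing it to a model, so the prompt contains signal instead of raw bytes. Here is a minimal sketch of a `strings`-style extractor in Python; the function name and the sample blob are illustrative, not part of any particular tool.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Pull printable ASCII runs out of a binary blob, similar to `strings`."""
    # [ -~] covers the printable ASCII range; {min_len,} filters out noise.
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Hypothetical binary fragment with two embedded strings.
blob = b"\x00\x01GET /flag HTTP/1.1\x00\xffsecret_key=s3cr3t\x02"
print(extract_strings(blob))  # → ['GET /flag HTTP/1.1', 'secret_key=s3cr3t']
```

The extracted strings, not the raw binary, are what you would paste into the model's context, which keeps the prompt small and the AI's attention on the interesting parts.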

The Real Skill: Prompt Engineering & Workspace Setup

Here’s what many people miss: using AI effectively is itself a skill.

It’s not about typing “solve this CTF challenge” into a chatbot. It’s about:

  • Prompt Engineering — Knowing how to ask the right questions, provide the right context, and guide the AI toward useful outputs.
  • Workspace Setup — Integrating AI into your existing tools (custom MCP servers, IDE plugins, automation scripts).
  • Critical Thinking — Knowing when the AI is wrong, when to verify, and when to take over manually.
  • Tool Selection — Choosing the right model and configuration for the right task.
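As a sketch of what "asking the right questions" can look like in practice, here is a small prompt-builder in Python. The template, field names, and example values are my own assumptions for illustration; the point is the structure: role, goal, context, and explicit constraints, rather than a one-line "solve this" request.

```python
def build_ctf_prompt(category: str, goal: str, context: str, constraints: list[str]) -> str:
    """Assemble a structured prompt: role, task, context, and limits for the model."""
    bullet_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are assisting with a {category} CTF challenge.\n"
        f"Goal: {goal}\n"
        f"Relevant context:\n{context}\n"
        f"Constraints:\n{bullet_lines}\n"
        "Explain your reasoning before suggesting any commands."
    )

# Hypothetical usage during a web challenge.
prompt = build_ctf_prompt(
    category="web exploitation",
    goal="identify why the login form accepts an empty password",
    context="POST /login handler pasted below (Flask)",
    constraints=["suggest read-only checks first", "do not guess flag values"],
)
print(prompt.splitlines()[0])  # → You are assisting with a web exploitation CTF challenge.
```

Constraints like "read-only checks first" are exactly the kind of guidance that separates orchestrating a model from blindly following it.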

The winners in this new era aren’t the ones who blindly follow AI output. They’re the ones who orchestrate it intelligently.

Use AI — But Never Stop Sharpening Your Own Blade

Here’s a truth no one should ignore: the future is uncertain.

We live in a world where geopolitical tensions shift overnight, economies fluctuate unpredictably, technologies rise and fall, and the global situation can change in ways no one anticipates. Today AI is everywhere, but what about tomorrow? What if access becomes restricted? What if the tools you depend on disappear? What if a crisis demands that you operate with nothing but your own mind and a terminal?

That’s why your core skills matter more than ever.

AI should be your co-pilot, not your crutch. If you let AI do all the thinking, you’re not gaining a superpower — you’re building a dependency. And dependencies are dangerous in an unpredictable world.

  • 🧠 Keep learning the fundamentals — networking, programming, system internals, cryptography. These don’t expire.
  • 🛠️ Practice without AI sometimes — Solve challenges manually. Stay sharp. Know that you can do it alone if you ever have to.
  • 📚 Stay curious — The landscape will keep shifting. The people who survive aren’t the strongest or the smartest — they’re the most adaptable.

Think of it this way: AI is like a power tool. A carpenter who only knows how to use a power drill is helpless when the electricity goes out. But a carpenter who also knows how to work with hand tools? They’ll build no matter what.

Don’t just learn to use AI. Learn so well that you don’t need it — and then use it anyway to be unstoppable.

Conclusion: Adapt or Be Adapted Around

AI is not the enemy. Complacency is. And double standards aren’t far behind.

The cybersecurity professionals, CTF players, and developers who embrace AI as a partner — not a replacement — will outperform those who don’t. Not because they’re less skilled, but because they’re more efficient.

To the CTF community: stop banning AI on one side while quietly using it on the other. Level the playing field. Let AI push challenge authors to create better tasks, and let players use every tool at their disposal to solve them. That’s how the entire ecosystem grows.

But never forget: tools come and go. Your knowledge stays. In a world full of uncertainty — politically, economically, technologically — the safest investment you can make is in yourself.

The game has changed. It’s no longer just about what you know — it’s about how intelligently you work, while making sure you never lose the ability to work without a safety net.

So the next time someone tells you that using AI in a CTF is “not fair,” remind them: the authors used it too. The goal was never to struggle more. The goal was always to be smarter.

And the next time you’re tempted to let AI do everything for you, remind yourself: the goal was never to stop learning.

This post is licensed under CC BY 4.0 by the author.