If AI Won’t Kill Your Job, It’ll Kill Your Critical Thinking

It’s 2025, and everywhere you turn, companies are throwing AI into their newest, shiniest products. Efficient, accurate, high ROI—backed by data-driven results! This gen-AI tool is the thing that will take your business to new heights, uncover gaps, analyze trends, and help you move forward with AI right behind you.

But let’s be real: haven’t we been using AI for the last 15 years? Apple launched Siri in 2011. Social media companies have been fine-tuning AI-powered algorithms to track our behavior and personalize content for over a decade. Even that $250 Roomba in your living room has AI built in. The difference now? It’s no longer about whether we use AI. It’s about who can afford more of it. At this point, we’re not stopping it. We’re just adjusting how much risk we’re willing to accept.

As a security professional, I think a lot about what that risk actually looks like as AI-powered assistants become more deeply woven into our lives. And here’s something to consider: AI engines, especially those built on largely unchecked LLMs, could end up being a company’s biggest single point of failure. I’m not talking about AI-powered botnets or cybercriminals using ChatGPT to draft phishing emails. I mean sheer overreliance on AI—getting so comfortable that we stop thinking critically. Companies are rushing to adopt AI, but at the end of the day, it’s just another tool. And like any tool, it can be exploited. I have a feeling some threat actors are playing the long game, waiting for the right moment to take advantage of that complacency.
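To make that concrete, here’s a deliberately naive sketch in Python. Everything in it is hypothetical (the `ask_assistant` helper just stands in for whatever LLM client a team has wired up), but the failure pattern is real: model output gets trusted enough to act on directly, with no one checking its work.

```python
import subprocess

def ask_assistant(prompt: str) -> str:
    """Stand-in for a call to some LLM API. Hypothetical: swap in
    whatever client your team actually uses. Here it just returns a
    canned string so the sketch runs on its own."""
    return 'echo "restarting service on web-01..."'

def triage_alert(alert_text: str) -> None:
    # The model sees attacker-influenced input (the alert text itself)...
    suggestion = ask_assistant(
        "Suggest a one-line shell command to remediate this alert:\n"
        + alert_text
    )
    # ...and its output runs with no allowlist, no validation, and no
    # human in the loop. Whoever can shape the alert can now shape the
    # command. That is what a single point of failure looks like.
    subprocess.run(suggestion, shell=True)

if __name__ == "__main__":
    triage_alert("Disk usage at 95% on host web-01")
```

None of the fixes here are exotic: treat model output like any other untrusted input, validate it against an allowlist, and keep a human in the loop for anything that executes. That review step is exactly what complacency erodes.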

Right now, we’ve collectively accepted a much smarter (and slightly creepier) Siri, social media platforms that post for us, and a robot vacuum that might also serve butter (iykyk). But I don’t want to leave you on a dark note. Here’s the good news: risk acceptance is still a choice. And sometimes, the best lessons come from watching someone else’s mistakes.
