There’s a strange mix of excitement, skepticism, and quiet anxiety in the air right now.

The AI world is moving fast. Too fast, if we’re being honest. Blink, and there’s a new model, a new agent, a new workflow everyone else seems to have already mastered. Keeping up feels less like learning and more like sprinting on a treadmill that keeps accelerating.

My own journey with AI didn’t start as an “early adopter” story. It started with burnout.

Last year, I found myself stuck in a loop many of us know too well: daily work consumed all my mental space, leaving nothing for exploration outside the scope of the job. Every day I told myself, “Tomorrow I’ll pick up AI.” And every next morning, I woke up to headlines that made me feel like I was already too late. The progress felt impossible to catch up with, and the AI FOMO was very real.

What changed wasn’t speed. It was intent.

I didn’t suddenly become an AI expert or a thought leader overnight. I made a deliberate, humble choice to start writing every day, experimenting, and learning just enough to quiet the noise. Over time, I found my rhythm. I learned how to tame the chaos instead of chasing every new thing. I wasn’t an early adopter, but I got on the bandwagon by showing up consistently.

The Collective Identity Crisis

And I still see many people struggling.

Not the healthy skepticism around AI ethics or privacy. That’s important and necessary. I’m talking about a deeper resistance. The kind that comes from having lived through multiple “next big things,” from crypto to countless other tech booms, and feeling tired. Tired of adapting. Tired of reinventing. Wanting something, anything, to stay constant.

There’s an identity crisis unfolding, and it looks different depending on where you stand.

For the older generation, it’s about stability. A desire for something familiar in a career that has already demanded so many rewrites.

For mid-level folks, there’s guilt. Am I cheating by using AI? What will people think? Am I still valuable if I rely on these tools? There’s a quiet fear of being replaced by the very thing that’s helping them keep up.

For juniors, AI feels like a superpower, and it is. But from the senior side, there’s concern. Junior developers can now move fast, sometimes too fast. Mistakes scale. And mentoring becomes harder when the pace of output outstrips the pace of understanding.

The Dunning–Kruger Effect and Systems Thinking

This is where the Dunning–Kruger effect feels more relevant than ever. We don’t know what we don’t know, and large language models amplify that gap. They work only with the context they’re given. They burn tokens. They make us feel productive. Sometimes they send us in circles.

Training matters.

System design context matters. Not just for seniors, but especially for juniors. And system design is not something you truly learn from theory alone. It sticks when you’ve designed something yourself, broken it, or lived through its consequences. Practical, hands-on experience is non-negotiable.

What’s interesting, and promising, is how we might rethink training. I’ve been exploring ideas like asking LLMs to explain system design concepts as if to a 10-year-old. Visual explainers. Simple mental models. Let juniors experiment, learn, apply, and then reflect on those lessons at work. The goal isn’t speed. It’s systems thinking.
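To make that concrete, here’s a minimal sketch of the kind of prompt I mean, assuming the OpenAI Python SDK with an API key in the environment; the model name and wording are illustrative, not prescriptive:

```python
# A minimal sketch, assuming the OpenAI Python SDK (`pip install openai`)
# and an OPENAI_API_KEY set in the environment. The model name below is
# an assumption; any chat-capable model would do.
from openai import OpenAI

client = OpenAI()

def explain_like_im_ten(concept: str) -> str:
    """Ask the model for a 10-year-old-friendly system design explanation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Explain system design concepts as if to a "
                           "10-year-old. Use one everyday analogy and "
                           "keep it under 150 words.",
            },
            {"role": "user", "content": f"Explain: {concept}"},
        ],
    )
    return response.choices[0].message.content

print(explain_like_im_ten("load balancing"))
```

The exact prompt matters far less than the habit it builds: get a simple mental model first, then go build the real thing and compare.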

Finding the Leverage

Personally, I don’t feel an identity crisis.

If anything, as a manager, this era feels like gaining superpowers. Where I could previously do five things sequentially, I can now do ten in parallel by running different agents. I’ve effectively designed a system around myself. My role hasn’t diminished; it has expanded. Judgment, prioritization, context-setting, and human decision-making matter more, not less.

For anyone looking to start, my advice is simple: stop debating whether AI is real or here to stay. That debate is already outdated. The real work is figuring out what works for you. Your workflows. Your learning style. Your boundaries.

AI isn’t here to replace intent. It’s here to amplify it.

The Paradox of Efficiency

Over the weekend, as these thoughts kept circling in my head, I came across two LinkedIn posts that finally helped me clear the fog. I’m quoting them here because they capture something essential:

Work is always relative to what others are doing, using available tools. To be best in the industry, you need either higher-quality output than everyone else, or more output than everyone else.

If everyone uses better tools, you either master them better or you simply work more. AI doesn’t change this.

And this one especially stayed with me:

By the end of 2026, no one will be talking about replacing engineers with AI—just like no one talks about replacing engineers with CI/CD anymore. It’s a tool that helps us move faster and cheaper, and the same patterns apply.

Bigger parking lots encourage more people to drive. Wider lanes increase traffic. Energy-efficient homes often lead to more energy consumption, not less.

AI isn’t reducing demand for engineers. If anything, it’s increasing it.

AI doesn’t end the need for humans. It raises the bar on how we think, learn, and lead.

And maybe that’s the real evolution. Not of technology, but of identity.
