There's a moment I keep coming back to. I was sitting in my room late one night, debugging a piece of code that just wouldn't cooperate. I'd been at it for hours, cycling through the same frustrating loop of trial and error. And then it hit me — what if the computer could just figure this out itself? Not in the way autocomplete suggests the next word, but actually understand what I was trying to accomplish and work backwards from there.
That thought has stayed with me ever since. It's the seed of something much bigger than a late-night coding session. It's the fundamental question driving the next era of artificial intelligence: can we build systems that don't just process information, but genuinely act on our behalf?
Beyond the Chatbot
When most people think about AI today, they picture chatbots. You type a question, you get an answer. It's impressive, certainly. These large language models can write poetry, explain quantum physics, and help you draft emails. But there's something fundamentally passive about them. They wait for your input. They respond. They wait again.
Agentic AI flips this entire paradigm on its head. Instead of waiting to be asked, an agentic system takes initiative. You give it a goal — something like "help me plan a trip to Japan" or "find and fix the bugs in this codebase" — and it goes off to actually accomplish that goal. It breaks the problem down into smaller tasks. It decides which tools to use. It handles unexpected obstacles. It asks for clarification only when genuinely necessary.
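To make that concrete, here's roughly what the core of such a system looks like: a loop that decides, acts, observes, and repeats. This is a minimal sketch in Python, with a hypothetical call_model function standing in for whatever LLM API you'd actually wire up; real agents dress this loop up considerably, but the skeleton is the same.

```python
# Minimal agent loop: decide, act, observe, repeat.
# call_model is a hypothetical stand-in for a real LLM API call.

def call_model(prompt: str) -> str:
    """Return the model's next decision given the prompt."""
    raise NotImplementedError("wire up your model provider here")

def run_agent(goal: str, tools: dict, max_steps: int = 20) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model for the next step, given everything so far.
        decision = call_model("\n".join(history) + "\nNext action?")
        if decision.startswith("DONE:"):
            return decision.removeprefix("DONE:").strip()
        if decision.startswith("ASK:"):
            # Escalate to the human only when genuinely stuck.
            answer = input(decision.removeprefix("ASK:").strip() + " ")
            history.append(f"Human: {answer}")
            continue
        # Otherwise treat the reply as "tool_name argument".
        tool_name, _, arg = decision.partition(" ")
        tool = tools.get(tool_name)
        result = tool(arg) if tool else f"unknown tool: {tool_name}"
        history.append(f"Action: {decision}\nResult: {result}")
    return "Stopped: step budget exhausted."
```

The interesting engineering lives in everything this loop glosses over: how the history gets summarised when it grows too long, how tools are described to the model, and when to give up.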
The difference might sound subtle, but it's actually enormous. It's the difference between a calculator and an accountant. Between a map and a guide who knows the terrain.
Why Now?
People have been dreaming about autonomous AI systems for decades. So what's changed? Why does this suddenly feel achievable?
The honest answer is that several technologies have matured at exactly the right moment. Large language models have developed an almost uncanny ability to understand context and intent. They can parse ambiguous instructions in ways that would have seemed like science fiction five years ago. Combine this with advances in reasoning — the ability to break complex problems into logical steps — and you have the cognitive foundation for genuine autonomy.
But there's another piece that's equally important: tool use. Modern AI systems are learning to interact with the digital world in the same way we do. They can browse websites, write and execute code, manage files, send emails, interact with APIs. Each of these capabilities might seem mundane in isolation, but together they create something remarkable. An AI that can think and act in the real world.
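What does tool use actually look like in practice? Often nothing more exotic than a table of named functions the agent loop can dispatch to. The sketch below is deliberately toy-sized; the run_python tool shows exactly why sandboxing and permissions matter before you let an agent near a real machine, and the "path::contents" convention in write_file is purely illustrative.

```python
import subprocess
from pathlib import Path

# A toy tool registry: each "tool" is just a named Python function.
# Real systems add sandboxing, permissions, and richer argument schemas.

def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(spec: str) -> str:
    # Expects "path::contents"; a made-up convention for this sketch.
    path, _, contents = spec.partition("::")
    Path(path).write_text(contents)
    return f"wrote {len(contents)} chars to {path}"

def run_python(code: str) -> str:
    # Executing model-written code is exactly where isolation matters.
    proc = subprocess.run(["python", "-c", code],
                          capture_output=True, text=True, timeout=30)
    return proc.stdout + proc.stderr

TOOLS = {"read_file": read_file, "write_file": write_file,
         "run_python": run_python}
```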
I've been experimenting with this myself. Building small agentic systems that can take a vague description of a website and actually create it. Not just generate the code and hand it back to me, but make design decisions, test different approaches, refine based on what works. The first time I saw one of these systems iterate on its own mistakes — recognising that something wasn't quite right and trying a different approach without any input from me — I genuinely felt like I was glimpsing the future.
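That iterate-on-its-own-mistakes behaviour sounds magical, but its skeleton is almost embarrassingly simple: generate, check, feed the failure back in, try again. Here's the pattern as I think of it, with generate_page and check_page as placeholders for whatever generator and validator you'd plug in.

```python
# Generate-check-refine: the failure report from each attempt becomes
# input to the next one. generate_page and check_page are placeholders.

def refine(description: str, generate_page, check_page, max_attempts: int = 5):
    feedback = ""
    for attempt in range(max_attempts):
        page = generate_page(description, feedback)
        ok, problems = check_page(page)
        if ok:
            return page
        # Tell the generator exactly what went wrong last time.
        feedback = f"Attempt {attempt + 1} failed: {problems}"
    raise RuntimeError("could not converge on a working page")
```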
The Challenges We Don't Talk About
Of course, it's not all smooth sailing. There are real challenges here, and I think any honest conversation about agentic AI has to face them head-on.
The first is reliability. When a chatbot makes a mistake, you can just ask it to try again. But when an autonomous system makes a mistake halfway through a complex task, the consequences can cascade. It might make decisions based on faulty earlier assumptions. It might take actions that are difficult to undo. Building in the right safeguards — ways for the system to check its own work, to escalate uncertainty, to fail gracefully — is genuinely hard engineering.
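Hard, but not mysterious. One pattern I keep reaching for is to gate every action behind a confidence check and to record an undo for each step before taking it, so the system can escalate instead of guessing and unwind instead of crashing. A rough sketch, with all names illustrative rather than borrowed from any real framework:

```python
class NeedsHumanReview(Exception):
    """Raised when the agent should stop and ask rather than act."""

class Safeguard:
    def __init__(self, confidence_floor: float = 0.8):
        self.confidence_floor = confidence_floor
        self.undo_stack = []

    def execute(self, action, undo, confidence: float):
        # Escalate uncertainty instead of guessing.
        if confidence < self.confidence_floor:
            raise NeedsHumanReview(f"confidence {confidence:.2f} below floor")
        # Record how to reverse the step before taking it.
        self.undo_stack.append(undo)
        return action()

    def rollback(self):
        # Fail gracefully: unwind completed steps in reverse order.
        while self.undo_stack:
            self.undo_stack.pop()()
```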
Then there's the question of trust. How do you verify that an agentic system is actually doing what you intended? You can't watch over its shoulder for every decision. You need new ways of auditing, of setting boundaries, of maintaining meaningful human oversight without eliminating the efficiency gains that make autonomy valuable in the first place.
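In code, meaningful oversight can start as something this simple: log every action before it runs, and require explicit approval for anything outside an allowlist. A crude sketch, again with made-up names; real boundaries need far more nuance than a set of strings.

```python
import json
import time

# Every action is logged before it runs; anything not on the allowlist
# needs a human's yes. Oversight at the boundary, not at every keystroke.

ALLOWED_WITHOUT_REVIEW = {"read_file", "run_tests"}

def audited_call(tool_name: str, arg: str, tools: dict, approve=input):
    record = {"ts": time.time(), "tool": tool_name, "arg": arg}
    if tool_name not in ALLOWED_WITHOUT_REVIEW:
        if approve(f"Allow {tool_name}({arg!r})? [y/N] ").lower() != "y":
            record["outcome"] = "denied"
            print(json.dumps(record))  # append to a real audit log
            return None
    record["outcome"] = "ran"
    print(json.dumps(record))
    return tools[tool_name](arg)
```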
And perhaps most fundamentally, there's the alignment problem. How do we ensure these systems pursue goals that actually match human values? This isn't a new question in AI research, but it becomes more urgent when systems are taking real actions in the real world. A chatbot with slightly misaligned values gives you a weird answer. An agentic system with misaligned values might actually do something harmful.
What Excites Me
Despite these challenges — or maybe because of them — I find this field incredibly exciting. We're not just building better tools. We're exploring a genuinely new relationship between humans and machines.
Think about what becomes possible when AI systems can work alongside us as genuine collaborators. A researcher who can delegate literature reviews to an AI assistant that actually reads and synthesises papers, identifies contradictions, spots gaps in the evidence. A developer who can describe a feature in plain English and have an AI colleague implement it, test it, and deploy it. A small business owner who gains access to the kind of operational efficiency that was previously only available to large corporations with dedicated staff.
The democratisation potential here is staggering. Agentic AI could be the great equaliser, giving individuals and small teams capabilities that rival large organisations.
But I think the more profound shift is philosophical. For the first time in human history, we're creating entities that can pursue goals autonomously. Not in the narrow, brittle way that traditional software does — following rigid rules without any flexibility — but in a genuinely adaptive, intelligent way. That's a remarkable thing. It raises questions about agency, about responsibility, about what it means to delegate decision-making to non-human systems.
Where I'm Heading
This is why I'm building Kaer. I want to be part of shaping this future, not just observing it. The specific domain I've chosen — using agentic AI for web creation — might seem narrow, but I see it as a proving ground for much bigger ideas.
When an AI system can take a vague description of what someone wants and turn it into a functional, beautiful website, it has to solve all the core problems of agentic AI. It has to understand intent. It has to make aesthetic and technical decisions. It has to iterate, test, refine. It has to handle the unexpected. And it has to do all of this while maintaining a coherent vision of what the human actually wanted.
Every day I work on this, I learn something new about how intelligence works. About how goals get translated into actions. About the gap between what we say and what we mean. These are ancient questions, really. Philosophers have been wrestling with them for centuries. But now we're answering them in code.
The future of agentic AI isn't predetermined. The systems we build, the safeguards we implement, the values we encode — these are choices we're making right now. I find that responsibility both daunting and exhilarating. We're not just predicting the future. We're building it.
And honestly? I can't imagine doing anything else.