The Year AI Grows Up
By Vitalij Farafonov • February 2026
A year ago, I was sitting in an auditorium at MIT’s Computer Science and Artificial Intelligence Laboratory, starting their AI for Senior Executives programme. It was roughly a month after DeepSeek’s R1 release had wiped nearly a trillion dollars off the semiconductor sector. The mood among the MIT academics was not panic. It was puzzlement – not at the technology, but at the market’s violent reaction.
To the researchers, DeepSeek had not invented anything new. It had combined well-known techniques with exceptional engineering discipline to achieve frontier performance at a fraction of the cost. US chip sanctions meant they could not simply throw more hardware at the problem. So they got creative. What stunned me was that the leading Western labs had not done this first. The explanation was clear: in a race to Artificial General Intelligence (AGI) fuelled by seemingly unlimited capital, efficiency had simply not been a priority.
Twelve months on, that lesson has only become more urgent. When I wrote here last November, I argued that AI was electricity, not a lightbulb – and that the real opportunity lay in redesigning the factory to use that electricity, not debating which lamp (chatbot) was brightest. Everything since has reinforced that view, while raising the stakes considerably. Big Tech now plans to spend roughly $660 billion on AI infrastructure this year – more than 6x the GDP of Luxembourg. And yet AI services generated only about $25 billion in revenue last year.
It is against this backdrop – record spending, fierce geopolitical competition, a market losing patience – that I offer nine developments I believe will define the next twelve months for businesses. Not all will play out exactly as described. But the direction of travel matters more than the exact coordinates.
One caveat. Two things are true simultaneously. First, for most business applications, today’s AI is already more than powerful enough. The bottleneck is deployment, not capability – and deployment is, at its core, an organisational, governance and cultural challenge, not a technical one. Second, the pace of improvement remains extraordinary. Anyone who forecasts the future of AI based solely on what the technology can do today is making a serious error. But anyone who delays deployment while waiting for the next model to arrive is making an even costlier one.
- AI Agents Have Broken Out of the Developer’s Terminal
2025 was the year AI agents proved they could code. 2026 is the year they break into every other function of the business – and beyond. Enterprise software vendors are no longer promising agentic capabilities; they are shipping them. Salesforce, SAP, Microsoft, and Oracle have all embedded AI agents into their core platforms. By year-end, Gartner expects 40% of enterprise applications to include task-specific agents, up from less than 5% a year ago. The agentic shift is reaching every part of the economy.
But this is not limited to enterprise software. Last month Google launched the Universal Commerce Protocol (UCP), an open standard enabling AI agents to handle discovery, purchasing, and post-sale support across the retail ecosystem, backed by Shopify, Walmart, Target, and over twenty other partners. A month prior to that, the major technology players launched the Agentic AI Foundation.
What makes agentic AI viable is the collapse in inference cost – accelerated, in part, by the efficiencies that DeepSeek R1 forced the industry to confront. A useful distinction: training is the "education" phase – the expensive, energy-intensive process of building the AI model. Inference is the "career" phase – using it to do actual work, every day. An AI agent does not think once; it loops, reasoning 50 or more times to complete a single task. A year ago, that was prohibitively expensive. Since then, inference costs have fallen by roughly 90%, and the quality of output has improved significantly. Agentic AI has crossed the threshold of commercial reality.
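To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. Every number in it – the tokens consumed per reasoning step and the old and new per-token prices – is an illustrative assumption, not a figure from this article or any provider's price list. Only the 50-step loop and the roughly 90% price fall come from the text above.

```python
# Back-of-the-envelope: why cheaper inference makes looping agents viable.
# All prices and token counts below are illustrative assumptions.

def task_cost(steps, tokens_per_step, price_per_million_tokens):
    """Cost of one agentic task that loops `steps` times through the model."""
    total_tokens = steps * tokens_per_step
    return total_tokens / 1_000_000 * price_per_million_tokens

steps = 50               # the article's "50 or more" reasoning loops per task
tokens_per_step = 4_000  # assumed prompt + response size per loop

# Assumed prices: $10 per million tokens a year ago, ~90% cheaper today.
cost_then = task_cost(steps, tokens_per_step, 10.00)
cost_now = task_cost(steps, tokens_per_step, 1.00)

print(f"then: ${cost_then:.2f} per task")  # $2.00 under these assumptions
print(f"now:  ${cost_now:.2f} per task")   # $0.20 under these assumptions
```

At cents per completed task rather than dollars, a task run thousands of times a day crosses from experiment to line item – which is the threshold the paragraph above describes.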
- The Personal AI Agent Becomes the Killer App
Despite the explosion of AI tools, many people still think of AI as “that thing that writes weird emails.” This is about to change. A number of companies and startups are racing to build what may prove to be the killer app of this generation: a personal AI agent that manages your everyday life. Not a chatbot you visit with a question, but an ambient intelligence that proactively helps – booking appointments, managing household admin, comparing insurance renewals, learning your preferences over time. Apple has partnered with Google to rebuild Siri around Gemini’s agentic capabilities, with the upgrade expected this spring. Samsung and numerous startups are pursuing similar visions.
One or more of these products will break through this year – not because the underlying technology is new, but because it will finally be packaged in a way that is useful enough, integrated enough, and affordable enough to become indispensable. The best technology doesn’t need to be explained. It’s the kind of technology that people stop noticing because it just works. For businesses, the practical implication is immediate: if your customer experience still requires people to navigate your systems, an AI agent from a competitor will soon offer to do it for them.
- The Graphical User Interface Begins to Dissolve
If agents are proliferating across enterprise platforms, consumer devices, and commerce, the logical consequence is a fundamental rethinking of how we interact with technology. Why navigate seven screens to file an expense report when you can tell an agent to do it? Why browse a retailer’s website when an AI agent can compare prices, check your loyalty points, and complete the purchase on your behalf? We are moving from software designed for human navigation to systems designed for AI-mediated interaction, in both the office and daily life.
The competitive advantage is shifting from the best-designed interface to the best-orchestrated agent experience. If your business depends on complex interfaces or is consumer-facing, AI agents are a direct threat unless treated as an opportunity.
As agents take on higher-stakes actions – executing trades, approving claims, placing orders – the governance question becomes critical. Organisations will need clear escalation thresholds, human-in-the-loop protocols, audit trails, and defined accountability before they let agents act on their behalf.
- Cybersecurity Becomes the Defining Challenge
A necessary grounding before the optimism runs too far. In November 2025, the first documented large-scale AI-orchestrated cyberattack was disclosed. A Chinese state-sponsored group used an AI agent to target roughly thirty organisations – banks, tech companies, government agencies – running the operation almost entirely autonomously, faster than any human team could operate. Separately, North Korean operatives used AI to fabricate identities and secure remote jobs at Fortune 500 companies.
These are not isolated incidents. They are signals of a threat landscape now operating at machine speed. AI-powered attacks are not just faster; they are adaptive, capable of thousands of intelligent requests per second. Quarterly security reviews are no longer adequate. Every board should be asking: does our cybersecurity posture reflect the threat landscape of 2026, or are we still defending against 2023?
- Conversational AI Becomes a Utility
In my November article, I compared debating whether ChatGPT or Gemini was “better” to arguing over lightbulb brands in 1910. That commoditisation is now accelerating, driven by powerful open-weight models – Meta’s Llama, DeepSeek, Alibaba’s Qwen, France’s Mistral – that are competitive with proprietary alternatives on most benchmarks and free to run. Enterprises can now deploy powerful AI on their own servers, with their own data, under their own control, without per-query fees.
In my conversations with European leadership teams, this shift is landing differently than it does in Silicon Valley. For European companies, this is strategically significant. Open-weight models offer a path to AI adoption without structural dependency on the hyperscalers – increasingly relevant as the EU AI Act takes effect and data sovereignty climbs the board agenda.
The implication is clear: over the coming months, conversational AI will be treated more like bandwidth than a premium product. And as I noted in my caveat above, the current generation of models is already more than capable for the vast majority of business applications. The question is no longer the level of intelligence available – it is what you do with that intelligence. The organisations that pull ahead will be those building the best workflows and decision-support tools on top of this commoditising capability, not those still debating which provider to choose.
- The Focus Shifts from AGI to ROI
For three years, the AI industry has been captivated by the pursuit of Artificial General Intelligence – a system that can match or exceed human capability across any domain. Billions have been poured into this race. But the goalposts keep moving, and there is a genuine question about whether, when AGI does arrive, we will even recognise it. The bubble exists, in part, because investors have priced every AI company as if it were on the verge of delivering AGI, which implies near-infinite future value. In reality, the immediate revenue lies in practical automation. You do not need a ten-trillion-parameter model to summarise a meeting or debug code. You need one that is smart enough, reliable, and cheap to run. We have been building Ferraris to deliver pizza. Now the price of a reliable scooter has collapsed – and most businesses are still delivering on foot.
The practical implication for boards is this: whether AGI arrives in two years or twenty is a fascinating intellectual question, but it is not the one that should be driving your thinking on AI. The question that matters is what return you can generate from the AI that is available today – whether you are using it to reimagine the business model or for operational efficiency. That is not a hypothetical conversation. It is a measurable one. And the organisations that are asking it are already pulling ahead.
- The Real Bottleneck Is People, Not Technology
I made the point in my caveat that deployment is an organisational, governance and cultural challenge, not a technical one. This deserves its own section. Every development in this article depends on organisations having people who can bridge the gap between what the technology makes possible and what the business actually needs. People who understand both sides well enough to judge where AI strengthens advantage, where it creates risk, and where it is simply the wrong tool.
This starts at the top. Board-level AI literacy is no longer optional. Directors do not need to understand transformer architectures, but they do need enough fluency to ask the right questions, challenge assumptions, and set meaningful direction. Beyond literacy, boards must create the conditions for AI to succeed: the psychological safety for teams to experiment; the culture that opens up imagination about what becomes possible; and the sponsorship that any major transformation requires.
The companies pulling ahead are the ones that are treating AI as an organisational change programme. Many AI initiatives fail not because the technology underperforms but because the organisation is not ready to absorb the change. Boards should be asking: do we have the right resources or do we need external support? Have we addressed workforce anxiety? Are our people equipped and motivated to work alongside AI, or are they quietly resisting it?
- The Bubble Bursts for Some – and That Will Be Good for AI
AI will generate enormous value. But not every company currently priced as an AI winner will capture it. When the correction comes, it will be a decoupling: chip companies priced for infinite growth face the bullwhip effect; standalone AI companies burning venture capital with no path to profitability face a reckoning; the hyperscalers, generating most of their revenue from existing businesses, will weather the storm and consolidate.
There is also a possibility that the current bubble is disrupted by a technological leap – new paradigms that push beyond the limits of the current scaling laws. World models that understand causality, not just correlation. Advances in test-time compute that allow smaller models to reason at the level of much larger ones. Multimodal systems that can see, hear, and act simultaneously. If one of these breakthroughs commercialises faster than expected, it would reset valuations just as dramatically – but for different reasons.
The crucial point: a correction, whether driven by markets or by technology, would be a reset of valuation, not value. The underlying utility of AI is not speculative – it is already visible and measurable in thousands of deployments, from customer service automation to code generation to clinical decision support. When a technology becomes cheaper, total demand for it typically rises (the Jevons paradox). The efficiency that bursts the bubble will be exactly the thing that accelerates adoption. A correction, if it comes soon, is healthy. If valuations keep inflating, the eventual crash could be systemic. Either way, the utility is permanent.
- Data Sovereignty Reshapes How Multinationals Operate
US export controls were supposed to maintain AI dominance by restricting China’s access to advanced chips. Constraint bred creativity: Chinese labs have produced models rivalling US frontier systems using older hardware. The sanctions did not cripple Chinese AI. In some respects, they sharpened it.
The logic of what comes next is straightforward. Generative AI rests on three inputs: mathematics, compute, and data. Chinese researchers have demonstrated world-class strength in the mathematical foundations of AI, and they have found creative ways to compensate for compute restrictions. That leaves data – the vast accessible repositories of text, code, and knowledge produced overwhelmingly in the US and Europe, on which the world’s models are trained – as the remaining strategic variable. The next frontier of competition will not be chips. It will be restrictions on training data, model weights, and the flows of information that feed AI systems.
For multinational companies, this is not geopolitical commentary. It is a compliance and operational reality. As data sovereignty regimes harden across the US, China, the EU, India, and the Gulf, global organisations may need to run separate AI stacks for different markets. This is already emerging in financial services and telecoms. Any board with significant operations across multiple jurisdictions should be scenario-planning now.
What This Means for You
I want to be direct. If you are a board member or senior executive, there is a reasonable chance your honest reaction to much of this has been to nod politely and tell colleagues you are “monitoring the technology closely.” I understand that reaction. But monitoring without acting is how organisations fall behind, not dramatically, but quietly, one quarter at a time.
Consider what has happened in just twelve months. An AI autonomously orchestrated a cyberattack on thirty organisations. A Chinese lab matched frontier US models at a fraction of the cost. Enterprise software vendors began shipping AI agents as standard. Apple partnered with Google to rebuild Siri as an autonomous assistant.
A year ago, much of this would have sounded like science fiction. We are now living in an age where science fiction becomes fact on a quarterly basis. And every part of your business ecosystem is responding: your customers are beginning to expect AI-enhanced experiences, your competitors are deploying it, your employees are forming opinions about it, and your suppliers are embedding it into their own platforms. The question is no longer whether AI is relevant to your business. It is whether you are shaping how it arrives or letting it arrive on someone else’s terms.
Companies that deployed AI in early 2025 have already built learning advantages that are difficult to replicate. Those still running pilots are falling further behind each month. “Monitoring” is not a strategy. It is comfort without progress.
The thread connecting all nine of these developments is a single shift: from hype to utility. From building the brain to using it. From chasing AGI to delivering ROI. AI is not, at its core, a technology challenge. It is a decision-making challenge – and, as I have argued throughout, an organisational one. The capability is here. The cost has collapsed. The only remaining variable is leadership and clarity of vision.
In November I closed by saying the ultimate value of AI would be unlocked not by algorithms, but by human imagination. That remains true. But the window for waiting has closed.
Vitalij Farafonov is a Founding Partner and CEO of foremost.ai – applied intelligence for the boardroom. Based in Luxembourg, he helps boards and leadership teams cut through AI noise, focus on what matters, make confident decisions and create measurable value through execution. He is also an experienced non-executive director, investor, and strategic advisor.
Disclaimer: The views and opinions expressed are those of the authors and do not necessarily reflect an official policy or position of AmCham.lu. Any content provided by our interviewees is their own opinion and is presented in their own words.

