AI Trends for 2026
And a report card for last year’s predictions.
I’ve now been doing these prediction posts long enough to know two things:
The AI landscape moves faster than any sane writing cadence.
The best way to stay honest is to grade yourself.
So, just like last year, let’s start by revisiting the 10 predictions I made for 2025.
Grading 2025’s predictions
Reasoning models get their moment: A. This one landed pretty cleanly (though the writing was already on the wall by late last year). 2025 really did pivot from “bigger models” to “better thinking,” with GPT-5 and its “thinking” variants, Anthropic’s reasoning-heavy Claude releases, and Google’s Gemini “Flash/Thinking” models all being explicitly framed around multi-step problem-solving rather than raw parameter counts.
No GPT-5: F. I said GPT-5 was too overhyped to ship in 2025 and that we’d mostly see refinements of the GPT-4 family. Instead, GPT-5 arrived in August as OpenAI’s new flagship, complete with a router between fast and reasoning modes, and is now the default comparison point in most “which model should I use?” discussions. I whiffed this one.
Better personalization: A-. I argued that “long-term memory” and personalization would be unlocked at the product layer more than the model layer. That’s precisely what we got: ChatGPT’s Memory became a first-class feature, Gemini added its own persistent profile-style context, and Claude deepened workspace-level memory and org knowledge. We’re still debating UX and privacy, but the basic bet - more serious, productized personalization - was correct.
The age of agents: A. I predicted that “agents” would be massively over-marketed but would genuinely start showing up in customer service and sales workflows, and I think that held up pretty well. 2025 was literally branded “the year of the agent” in countless decks, and tools like Intercom’s Fin, Shopify’s Sidekick, and legal platforms like Harvey all leaned hard into the “AI agent” framing, with results that are powerful in some workflows and underwhelming in others.
Multiplayer mode: B. I said we’d move beyond pure single-player AI into more “Google Docs for AI” experiences. Late in the year, ChatGPT group chats shipped, allowing up to 20 people to collaborate with a shared AI in one conversation, and Microsoft and others started pushing similar multi-user experiences. It’s still early and somewhat niche, but we now have real products that look like the multiplayer version of the chatbots we’ve been using solo.
AI gets a major movie/music credit: B. My claim was that we’d see a blockbuster or chart-topper where AI played a crucial creative role. The music side delivered more than film: AI-assisted and AI-generated tracks reached mainstream charts, and the industry has been forced into very public arguments about how to credit AI in songwriting and production. But Hollywood screenwriting credits remain strictly human by guild rules, so we didn’t quite get “Written by Claude” in the final film credits.
Normalized slop AI content: B+. I argued that as long as people didn’t know something was AI-generated, they’d mostly accept it - and that the label was doing a lot of the work. There’s definitely room for debate here, but I think that’s what’s happened: AI-written and AI-edited content is now pervasive in marketing, SEO, product copy, and even some newsroom workflows. On the video side, we saw not one but two “AI TikTok” apps: Sora and Vibes, and although neither has taken off in terms of usage yet, the fact that they’re not blowing people’s minds feels like a story in and of itself.
Regulation gets dialed back (even more): A-. I expected the Trump administration to move away from Biden’s more aggressive federal AI posture, while state-level rules related to deepfakes remained bipartisan. That’s basically what we saw: the new administration has emphasized light-touch, innovation-friendly AI policy, while Congress overwhelmingly passed the bipartisan TAKE IT DOWN Act targeting non-consensual deepfake imagery and similar harms. And just last week, things went to a new level with Trump’s Executive Order preventing states from regulating AI entirely.
Corporate consolidation: C. I argued that ChatGPT/Claude/Gemini were the durable, general-purpose platforms, and that everyone else would either niche down or be folded into them. There was some evidence to back this up: the Windsurf “acquisition” (first with OpenAI, then with DeepMind) and Jony Ive’s io deal come to mind. However, there is still considerable growth and diversity in the model ecosystem - more than I had anticipated.
Investor hype begins to cool: B. I said the real valuation crunch was 18–24 months away but expected to see the early signs in 2025: investors asking harder questions about ROI and sustainability. What happened instead was a peculiar duality: the dollars continued to flow at eye-watering levels, but the narrative surrounding those dollars shifted toward efficiency and infrastructure economics. If 2024 was “fund anything with ‘AI’ in the deck,” 2025 felt more like “fund anything with a plausible path to revenue and a GPU story.”
If you average all of that out, it’s a lot closer to a B than the C+ I gave myself for 2024. I still managed to be confidently wrong about at least one big, obvious thing (GPT-5), which feels important to preserve for humility reasons.
Trends, not predictions
In past years, this is the part where I’d rattle off another list of 10 bets for the coming year: what models will ship, which companies will blow up, which weird corner of the economy gets automated next.
But I’m mixing it up a bit this year. I’m going to take a break from specific AI predictions, and instead, I want to discuss some of the larger trends I’m seeing (or feeling?) at the moment. With any luck, they’ll continue to crystallize in 2026, and I’ll look pretty smart.
Agentic harnesses
2025 was marketed as “the year of the agent,” but in practice, we found that the harness around the model matters just as much as raw model capability. Claude Code opened my eyes to this: by layering planning, file-system access, and a consistent workflow wrapper on top of a strong model, it unlocked real, repeatable multi-step behavior that feels meaningfully different from “chat with a coder bot.”
We’re now seeing similar harnesses emerge in other domains, from Shopify’s Sidekick to Harvey’s legal assistant. The models are finally good enough that the bottleneck is shifting to how we structure work around them: plans, tools, guardrails, state, and UX. I’m excited to see a lot more progress on the harness side - from IDEs to CRMs to vertical SaaS - without necessarily needing a brand-new model breakthrough to power it.
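To make that concrete, here’s a rough sketch of the loop most of these harnesses share: the model proposes one action at a time, and the wrapper executes it with guardrails and feeds the result back. Every name in it (call_model, the TOOLS table, the message shapes) is an illustrative placeholder, not any particular product’s API:

```python
# Minimal agent-harness loop. The model proposes actions; the harness
# executes them with guardrails and feeds results back into the context.
# All names here are hypothetical placeholders for illustration.

TOOLS = {
    "read_file": lambda path: open(path).read(),
    "write_file": lambda path, text: open(path, "w").write(text),
}

def run_task(goal: str, call_model, max_steps: int = 10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        step = call_model(history)  # e.g. {"tool": ..., "args": {...}} or {"done": "..."}
        if "done" in step:
            return step["done"]     # the model declares the task finished
        history.append({"role": "assistant", "content": str(step)})
        tool = TOOLS.get(step["tool"])
        if tool is None:            # guardrail: only whitelisted tools ever run
            result = f"error: unknown tool {step['tool']}"
        else:
            result = tool(**step["args"])
        history.append({"role": "tool", "content": str(result)})
    return "stopped: step budget exhausted"  # guardrail: bounded autonomy
```

Notice how much of the value lives outside the model call: the tool whitelist, the step budget, the state accumulating in history. That’s the harness.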
Standardized LLM primitives
For the past three years, everyone has been reinventing the same handful of building blocks: web search, code sandboxes, file editing, tool use, memory, and reusable “personas” or prompt setups. They’ve appeared under different names - tools, actions, skills, workflows, GPTs, projects - but they all rhyme.
At this point, the primitive set seems pretty stable. Every serious assistant lets the model call tools, browse, write and run code, read and edit files, remember things about you, and operate within some reusable configuration. What’s missing is standardization: consistent interfaces and shared norms for how these primitives are described and wired together across products and platforms.
The direction of travel seems clear: MCP-like protocols for tools, llms.txt-style conventions for content exposure, and more explicit “this is a workflow, not just a prompt” constructs. My hunch is that 2026 will be less about inventing new primitives and more about solidifying the existing ones into something developers can rely on.
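If you squint, most of these primitives reduce to the same shape: a name, a description, and a typed argument schema. Here’s a sketch of what a standardized tool declaration tends to look like; the field names loosely mirror MCP-style specs but aren’t copied from any single vendor’s format:

```python
# A protocol-neutral tool declaration: name, description, typed arguments.
# The exact field names vary by platform; this is an illustrative shape.
web_search_tool = {
    "name": "web_search",
    "description": "Search the web and return the top results.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "max_results": {"type": "integer", "default": 5},
        },
        "required": ["query"],
    },
}
```

The standardization bet is that a declaration like this gets written once and understood everywhere, instead of being re-described in each platform’s bespoke tool format.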
Transcending turns
ChatGPT’s group chats are a nice step toward “multiplayer AI”: multiple humans, one shared AI, everyone looking at the same canvas. But underneath, the interaction model is still fundamentally turn-based. Someone types, the model responds, and we take turns passing the talking stick around.
The next step is real-time collaboration between humans and agents. Imagine co-editing a design with an AI cursor moving alongside yours, or a sales team and an AI partner jointly running a live demo without carefully sequenced prompts. We’re starting to see the infrastructure pieces for this (shared canvases, streaming APIs, agent orchestration), but the UX patterns are still embryonic. The moment we get compelling “always-on, always-there” AI collaborators that don’t feel like glorified chat windows, I think it’s going to feel like a pretty big deal.
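One way to see the shift: turn-based chat is a request/response loop, while real-time collaboration looks more like a shared event stream that humans and agents both subscribe to. A deliberately simplified sketch, with hypothetical event types chosen only to show the shape:

```python
import asyncio

async def agent_loop(events: asyncio.Queue, emit):
    """React to a shared event stream instead of waiting for a prompt.
    The event types here are hypothetical illustrations."""
    while True:
        event = await events.get()
        if event["type"] == "shape_added":
            # The agent acts unprompted, like another collaborator at the canvas.
            await emit({"type": "suggestion",
                        "text": f"Want me to align {event['id']} to the grid?"})

async def demo():
    events: asyncio.Queue = asyncio.Queue()
    outbox = []

    async def emit(msg):
        outbox.append(msg)

    task = asyncio.create_task(agent_loop(events, emit))
    await events.put({"type": "shape_added", "id": "box-1"})
    await asyncio.sleep(0.01)  # give the agent a beat to react
    task.cancel()
    print(outbox)              # one unsolicited suggestion - no turn was taken

asyncio.run(demo())
```

The point isn’t the code; it’s that nobody typed a prompt. The agent participates in the same stream of activity as everyone else.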
Political backlash
Two and a half years ago, the dominant political question around AI was “how fast can we regulate this?” Now, it feels more like “how loudly can we signal that we’re mad at it?” Between deepfakes, parasocial relationships, and data center build-outs, AI has become a convenient vessel for a bunch of different anxieties: misinformation, pornography, surveillance, psychosis, water usage, energy grids, and “big tech” power in general.
At the same time, there’s now at least one explicit pro-AI political machine in the form of Leading the Future (the $100M pro-AI super PAC), without an obvious, equally organized counterweight on the anti-AI side. That feels unstable. My expectation isn’t some coherent, thoughtful “AI doctrine,” but a messy wave of backlash politics: local fights over data centers, reactive content rules, performative hearings, and eventually a serious anti-AI PAC or coalition trying to turn that sentiment into power.
IPO musical chairs
Finally, there’s the capital markets question. The rumor mill has both Anthropic and OpenAI eyeing IPOs within the next two years, and the broader AI ecosystem is already minting massive public companies in adjacent infrastructure (CoreWeave, etc.).
If the successful IPO window stays open, the rational move is to wait: grow into your valuation, deepen your moat, keep raising private rounds from hyperscalers and sovereign wealth funds, and file when you’re ready. But if the window starts to wobble - macro shocks, rate shifts, an AI backlash that spooks public investors - we could see something closer to the SPAC era: a rush to get out while the getting’s good, followed by a long hangover of underwater AI IPOs.
I don’t know which way that breaks. What I do believe is that the timing and sequencing of a handful of big AI IPOs will do a lot to set the narrative for the “AI decade”: are these disciplined, cash-generating infrastructure businesses, or expensive science projects propped up by irrational exuberance?
Your turn
As always, I’m sure I’ve missed the mark on at least one of these. But that’s the fun of writing about this space - things change so fast that I don’t have to wait long to see whether I’m right or not. And even if I’m completely wrong, I’m looking forward to seeing how things play out.
What are you thinking about for 2026? Share your trends, predictions, and forecasts in the comments!





