Six months ago, everyone said AI would replace developers. Today, much AI-generated code takes more effort to debug than writing it from scratch would. The AI coding revolution was supposed to be here by now. GitHub Copilot would write our applications. ChatGPT would eliminate junior developers. Cursor would make senior developers 10x more productive.
Instead... we’re drowning in plausible-looking code that doesn’t quite work.
AI vendors are racing to offer million-token context windows. But... they're not doing it for your benefit.
Remember when 4K context windows felt limiting? Then 8K arrived. Then 32K. Now we're being sold 128K, 200K and even million-token windows as the solution to all our AI problems. Sadly, it's all a con.
Here's what they're not telling you.
Every message you send doesn't just cost you the latest prompt. The AI re-reads the entire conversation history - every single token - to generate each response.
Send a 100-word question in a fresh chat? You're using roughly 100 tokens. Send that same question after an hour of back-and-forth? You might be burning through 50,000 tokens for the same answer.
The cost doesn't stay flat, either - each message is pricier than the last, because the whole thread is re-read every time. Per-message cost grows linearly with the length of the conversation, and the running total grows quadratically. Those costs compound fast. Think Cookie Monster but for tokens!
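To see both effects in one place, here's a minimal Python sketch. The roughly-four-characters-per-token estimate is a crude assumption rather than a real tokenizer, and the loop simply stands in for API calls:

```python
def estimate_tokens(text):
    # Crude assumption: roughly 4 characters per token. Not a real tokenizer.
    return max(1, len(text) // 4)

history = []
cumulative = 0
for turn in range(1, 51):
    history.append("A typical exchange, question or answer. " * 12)
    # Every call re-reads the whole history, not just the newest message.
    this_call = sum(estimate_tokens(m) for m in history)
    cumulative += this_call
    if turn in (1, 10, 50):
        print(f"turn {turn:>2}: this call {this_call:>6,} tokens, "
              f"billed so far {cumulative:>8,}")
```

In this toy thread, turn 1 costs about 120 tokens, turn 10 about 1,200, and turn 50 about 6,000 - while the running total for the whole thread passes 150,000 tokens for fifty short exchanges.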
Larger context windows enable sloppy habits. Why bother being precise when you can just dump everything into the conversation and let the AI "figure it out"? Why manage your prompts when the context window is supposedly infinite?
This laziness has a price tag. A substantial one.
A well-structured conversation with clear prompts might cost pennies. That same interaction stretched across a rambling 50-message thread could cost pounds - delivering identical results while training you to be less effective.
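Putting hypothetical numbers on it - the £0.01 per 1,000 input tokens rate below is an illustrative assumption, not any vendor's actual price list:

```python
RATE_PER_TOKEN = 0.01 / 1000   # assumed: £0.01 per 1,000 input tokens

def thread_cost(turns, tokens_per_turn):
    # Each call re-reads all prior turns, so the thread's total input
    # tokens form a triangular sum: k * n * (n + 1) / 2.
    total_tokens = tokens_per_turn * turns * (turns + 1) // 2
    return total_tokens * RATE_PER_TOKEN

print(f"Focused 5-message thread:   £{thread_cost(5, 500):.2f}")   # pennies
print(f"Rambling 50-message thread: £{thread_cost(50, 500):.2f}")  # pounds
```

Same 500-token exchanges, same final answer: the only difference is how much dead history each call drags along.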
AI platform owners understand something most users don't: confusion is profitable.
Larger context windows sound like features. They feel like generosity. In reality, they're revenue optimization strategies that shift costs to users who don't understand the pricing model.
The more context you use, the more you pay - but the pricing structure obscures this reality until your bill arrives.
Effective AI use requires discipline:
- Write clear, specific prompts instead of dumping everything in and hoping the AI "figures it out".
- Start a fresh conversation when the topic changes, rather than stretching one thread across everything.
- Keep threads short and focused, trimming or summarising history you no longer need.
This isn't just about cost control. Focused, well-managed conversations produce better results. Shorter contexts mean the AI spends less computational effort tracking irrelevant history and more on your actual problem.
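If you drive models through an API, one way to enforce that focus is to cap the history you resend. This is a minimal sketch assuming an OpenAI-style list of role/content messages; keep_last is an arbitrary cut-off, and real systems often summarise older turns instead of dropping them:

```python
def trim_history(history, keep_last=6):
    """Keep any system prompt plus only the most recent exchanges."""
    system = [m for m in history if m["role"] == "system"]
    recent = [m for m in history if m["role"] != "system"][-keep_last:]
    return system + recent

messages = [{"role": "system", "content": "You are a concise code reviewer."}]
messages += [{"role": "user", "content": f"Question {i}"} for i in range(40)]

# Send a focused slice instead of the whole 40-message thread.
payload = trim_history(messages)
print(len(payload))  # 7: the system prompt plus the last 6 turns
```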
At Lanboss AI, we help organisations implement AI without falling for vendor marketing. Understanding how these tools actually work - and what they actually cost - is fundamental to safe, cost-effective AI adoption.
Larger context windows aren't inherently bad. But treating them as an invitation to lazy prompting and unmanaged conversations is expensive and counterproductive.
TL;DR Million-token context windows are a feature designed to increase platform revenue, not improve your results. Disciplined prompting and conversation management will save you money and deliver better outcomes.
I was most honoured to be featured in the GCS Leaders Podcast series. David Bloxham and I have a fun and meandering conversation about the myths and realities of AI: where it is today, and how to use it safely to your advantage.
Over the past month or so we have seen a growing number of articles about AI market over-hype, and in particular about Microsoft, which has pulled back from plans to build new data centres and from letters of intent relating to the purchase of 2 gigawatts (GW) of power - a significant scaling back. The general feeling appears to be that AI's energy consumption is already too high, and that delivering on the promises will exacerbate that further.
The EU AI Act is the world's first comprehensive law governing AI, setting a precedent for how organisations across industries must handle AI responsibly. Among its many provisions, a crucial but often overlooked requirement is AI literacy: a mandate that ensures employees working with AI have the necessary skills and understanding to assess AI risks and opportunities effectively.
In today's fast-paced business landscape, AI adoption is emerging as a transformative strategy that mirrors the revolutionary approaches of the past. Much like the 1990s Business Process Re-engineering (BPR) and the 2000s Kanban and Lean methodologies, integrating AI into your operations is about optimizing processes, enhancing decision-making and securing a competitive edge. However, the journey to successful AI implementation is neither instantaneous nor magical.
The AI industry found itself at another potential inflection point with Elon Musk's expressed interest in acquiring OpenAI, the company he co-founded in 2015 before departing in 2018. This development is particularly noteworthy given the complex history between Musk and OpenAI, as well as the current state of the AI industry.
DeepSeek has made waves by delivering high-performing AI models with significantly lower resource requirements than industry giants like OpenAI and Anthropic. Here's how they've done it: