Issue #8 · May 5, 2026

The current situation around AI is complicated, and it is complicated for everyone — the sellers and the buyers. Anthropic is on track for thirty billion dollars in revenue this year. That is impressive growth.

On the other side, big tech is spending hundreds of billions on infrastructure. The gap between the revenue and the investment is huge. Nobody, including Sam Altman, knows how this is going to play out.

We are the buyers. And we can do something about the situation. The AI companies are absorbing all the attention right now. But the question that matters more is what their behaviour means for us. What is the best next move for our company? How do we avoid locking ourselves in too early in the game?

This issue is different: the focus is on how to avoid lock-in. Here are some data points and questions you should ask in your team. The goal is to avoid committing too early to AI company A, B, C, or D, because there might be a company E in the future.

THE NUMBER

13 to 1

For every dollar of revenue, the big AI infrastructure companies are spending thirteen dollars on data centres and compute. OpenAI and Anthropic together are on track to generate roughly fifty-five billion dollars in annualised revenue this year. That shows there is strong demand.

On the other side, last week's earnings calls put hard numbers on the spending. Microsoft is on track for around $190B in 2026 capex. Amazon for $200B. Alphabet raised its 2026 range to $180–190B and warned 2027 would go meaningfully higher. Meta raised to $125–145B. As Evan Armstrong put it: "Datacenter money printer go brrrrrr." (Source: The Leverage)
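For transparency, here is the back-of-envelope arithmetic behind the headline ratio, sketched in Python. It uses the midpoints of the ranges above; treat it as an illustration, not exact accounting.

# Rough check of the 13-to-1 figure, using the capex numbers quoted
# above (midpoints of the ranges, in billions of US dollars).
capex = {
    "Microsoft": 190,
    "Amazon": 200,
    "Alphabet": 185,  # midpoint of $180-190B
    "Meta": 135,      # midpoint of $125-145B
}
revenue = 55  # OpenAI + Anthropic combined annualised revenue, roughly

total = sum(capex.values())  # 710
print(f"Capex: ${total}B, ratio: {total / revenue:.1f} to 1")  # about 12.9 to 1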

The investment is the shaky part of the bet. But what if it does not work? What if a future, better, and much cheaper model from China provides the same output at a fraction of the cost and compute?

In short: we do not know what the future brings. That is why this week's Talk of the Week is about why it is a good idea not to commit to an AI vendor just yet.

TALK OF THE WEEK

The Google of AI Has Not Been Founded Yet

How to think about vendor lock-in without rushing the decision

The choice that keeps coming up

Here is a guess: you are sitting in a lot of AI strategy meetings right now. The same question is being discussed in millions of other companies. Are we an OpenAI shop? Better to extend our Microsoft contract and go with Copilot? Google Workspace AI? Anthropic?

The way this is framed is good for the vendors. It is not good for the companies, because the race is not decided. Search engines in the 1990s looked like a settled market too — AltaVista, Excite, Lycos, Yahoo, AOL. Then Google arrived. Nothing else mattered.

The next Google of AI may not have been founded yet. The next leap might be different, better, and cheaper.

Why this matters: committing to one vendor now means locking yourself out of whatever comes next.

How lock-in actually happens

Lock-in rarely starts with contracts. It starts with convenience. The companies that signed big AWS contracts in 2012 are mostly still on AWS — not because nothing better was built, but because the switching cost rose every year.

AI lock-in works the same way, one layer up. Once your prompt libraries, evaluation data, and fine-tuning sit inside one provider's infrastructure, switching is no longer a technical decision. It is a rebuild.

There is a second layer. The most capable models are released in closed previews to selected enterprise customers months before they reach general availability. If you are building on a provider's own tooling because you have no alternative, you are also building around their release schedule.

Before you walk into that meeting

Most AI decisions in organisations right now are not really about technology. The IT director has drafted the Copilot rollout plan. The head of communications likes Claude. Someone in finance wants to know why we cannot just use ChatGPT. Walking in with the lock-in argument above will not win that conversation.

What I find more useful, when I am in these conversations, is to slow down and ask people what they actually need. The answers are different across teams, and the architecture follows from the answers. Four questions I keep coming back to:

1. Where in your work is something taking longer than it should? Opens with the person's actual problem, not the technology. Sometimes the bottleneck is a broken process, not a missing tool.

2. What have you tried with AI already, and what worked or did not? Surfaces the experience already in the room. People who tried something and were disappointed are often the most thoughtful contributors, and they get talked over when the room is dominated by enthusiasts.

3. What are you nervous about? The question most people skip. Some are nervous about being replaced. Some are nervous about confidential data leaving the building. Some are nervous about being seen as a Luddite. All three need different answers.

4. If a clearly better tool appears in six months, how long would it take us to adopt it? The lock-in question in plain language. It sounds practical, not strategic, so people answer honestly. The answer is usually more useful than a vendor comparison.

None of these questions argue against any specific vendor. They give the room a way to talk about what your company really needs.

The multi-engine approach

The alternative is not to refuse AI. It is to keep the option to switch.

Use APIs. Connect to the currently best system for each specific task. Translation from Mandarin to German is a different problem from summarising legal contracts. Different models will be best for each, and the best will change every few months. The question to ask about any setup: can it swap models in hours, or would that take a quarter?
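To make "swap models in hours" concrete, here is a minimal sketch of a task-to-engine routing layer in Python. The engine names, tasks, and stub implementations are all hypothetical; real adapters would call each vendor's API behind the same prompt-in, text-out shape.

from typing import Callable

# Each adapter wraps one vendor behind the same signature. In production
# these would call the vendor's API; here they are illustrative stubs.
def engine_a(prompt: str) -> str:
    return f"[engine A] {prompt}"

def engine_b(prompt: str) -> str:
    return f"[engine B] {prompt}"

# Routing lives in one place. Swapping the engine behind a task is a
# one-line change here, not a rewrite of every caller.
ROUTES: dict[str, Callable[[str], str]] = {
    "summarise-contract": engine_a,
    "translate-zh-de": engine_b,
}

def run(task: str, prompt: str) -> str:
    return ROUTES[task](prompt)

print(run("translate-zh-de", "Example sentence"))

The point of the pattern: callers depend on the task name, not on any vendor.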

Keep your prompt libraries, evaluation data, and fine-tuning in places you control rather than inside one provider. Treat the model layer as interchangeable. For now, it is.
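A sketch of what "places you control" can mean in practice: prompts and evaluation cases live as plain files in your own repository and are loaded at call time, so no provider holds them. The paths and file shapes below are assumptions, not a standard.

import json
from pathlib import Path

PROMPTS = Path("prompts")  # e.g. prompts/summarise-contract.txt
EVALS = Path("evals")      # e.g. evals/summarise-contract.json

def load_prompt(task: str) -> str:
    # A version-controlled text file, independent of any provider's console.
    return (PROMPTS / f"{task}.txt").read_text(encoding="utf-8")

def load_eval_cases(task: str) -> list[dict]:
    # A list of {"input": ..., "expected": ...} records you can run
    # against any engine to compare quality before switching.
    return json.loads((EVALS / f"{task}.json").read_text(encoding="utf-8"))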

One example, from Deutsche Welle

Deutsche Welle uses plain X, one of the three projects I am involved in, for exactly the reason described above. From DW's perspective, plain X is middleware: it provides workflows, collaboration, and access to more than twenty-five transcription and translation engines. If a better engine for Mandarin-to-German translation appears tomorrow, DW can switch within hours.

The companies that signed big cloud deals in 2012 are mostly still on AWS. The companies signing big AI deals in 2026 might want to ask whether they still want to be there in 2036.

GOOD TO KNOW

Three releases from the past two weeks suggest the cheaper-technology scenario above is not hypothetical. Betting on a single US provider, on the assumption that capability scales with capex, is a bet against an alternative that already exists:

DeepSeek V4-Pro: Scores within 0.2 points of Anthropic's Claude Opus 4.6 on SWE-bench Verified, at roughly one-seventh of the price. Trained on Huawei Ascend chips rather than NVIDIA hardware. Performance and cost are starting to move independently of each other.

Two more in the same direction: Alibaba's open-weight Qwen 3.6 runs on consumer hardware, and Moonshot's Kimi K2.6 became the first open-weight model to outperform GPT-5.4 on SWE-Bench Pro.

ON THE CALENDAR

WAN-IFRA World News Media Congress · 1–3 June 2026 · Marseille · wan-ifra.org — News publishers gather. AI licensing and platform dependency are the through-lines. RSL and the publisher-AI debate from Issues #2 and #5 will surface here.

TAUS Massively Multilingual AI Conference · 3–5 June 2026 · Rome · taus.net — Framed as Europe's response to US-led AI in translation technology. The main first-half gathering for the API-versus-platform question in language services.

EAMT 2026 · 15–18 June 2026 · Tilburg · eamt2026.org — The main European academic conference on machine translation.

re:publica · 18–20 May 2026 · Berlin · re-publica.com — Europe's largest digital society festival. Useful for tracking where the wider digital culture conversation is heading.

BEFORE YOU LEAVE

If you want to try a multi-engine platform this afternoon, you can do that on Mammouth.ai. It is not the only multi-engine provider, but it is a good place to test, see the differences between models, and learn something. It offers GPT, Claude, Gemini, Llama, Mistral, plus the Chinese models named above (DeepSeek, Qwen, Kimi), for around ten euros a month.

Of course, by the logic of this issue, Mammouth is not the simple alternative, because it is itself another external platform. But for a test and a check-up the platform works well. If you have another, similar go-to platform, please let me know in the comments.

ABOUT & DISCLOSURE

I am Mirko Lorenz. I work on language technology projects at Deutsche Welle in Germany.

Three projects you will hear about in this newsletter:

  • plain X (plainx.com) — media localisation platform, DW Innovation / Priberam.

  • ChatEurope (chateurope.eu) — AI chatbot network for 15 European news partners.

  • MOSAIC (mosaic-media.eu) — EU DIGITAL EUROPE-funded multilingual media infrastructure.

I cover all three with the same critical lens I apply to competitors.

AI use: I use Claude.AI (Anthropic) for research and to edit this newsletter, based on refined and specific prompts. This week I tested Mammouth.ai. My goal is to understand where the AI really performs and where it fails. Responsibility for stated facts, names, and links is entirely mine.

babylon-newsletter.com · Every Tuesday

7,000 languages. AI works for 20.
