Year-end review of a business consultant who would rather build systems
By Christian Schappeit | December 24, 2025
I have to confess something: In September, I wrote an article that upset many people. It was called "Europe's Digital Self-Restraint." In it, I described how we are paralyzing ourselves with GDPR, the DMA, and the AI Act while Silicon Valley and Shenzhen build the future. The reactions were... divided. Everything from "Finally, someone says it" to "traitor." Now, three months later, I'm sitting at my desk again. Between me and the screen: stacks of studies, forecasts, analyses. And I wonder: Should I really write another article? Will anything change? Probably not. But here it is anyway. Because sometimes you have to say things out loud, even if no one wants to hear them.
How CRM Projects Became Compliance Theater
My actual job is advising companies on business systems: CRM implementations for pharma and biotech, data management strategies that must meet regulatory requirements, content management systems that document GxP compliance.
The reality in 2025? A typical project now looks like this:
- 25% system implementation
- 15% change management
- 60% compliance documentation
I make good money from it. The hours get paid; compliance is a lucrative business. But is that our goal? Should we really spend 60% of our time documenting why a CRM system is GDPR-compliant instead of building it? I didn't set out to become a compliance consultant. I wanted to build systems that make companies better. That make sales teams more productive, marketing teams more data-driven, service teams more responsive. Instead, I write data processing agreements, create GDPR documentation, prepare AI Act assessments. And now AI is layered on top, bringing even more regulatory requirements.
What Really Happened in 2025
Forget the headlines about GPT-5 or Gemini Ultra. That wasn’t the real story. The real revolution happened elsewhere, and most missed it.
AI has become a colleague rather than just a tool.
I don’t mean that metaphorically. I’m talking about Agentic AI – autonomous systems that don’t just answer questions but work independently. They don’t say "here is the analysis," but "I have done the analysis, created the presentation, scheduled the meetings, and will only inform you if something unexpected happens." This is not sci-fi. It’s happening right now. As you read this.
Last week, at one of our pharma clients, I watched an AI agent completely reconfigure lead scoring in the CRM. Independently. Analyzed historical data, recognized conversion patterns, optimized scoring rules, proposed new workflows. The sales operations manager only had to review and approve. Previously, his team would have needed two weeks – between data extraction, analysis, testing. Now: three hours.
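What an agent like that actually produces is often unglamorous: scoring rules derived from historical conversion data. Here's a minimal sketch of the underlying idea, with invented field names and toy data (nothing from any real CRM):

```python
# Hypothetical sketch of data-driven lead scoring: attribute weights are
# derived from historical conversion rates, not hand-tuned by a human.
# Field names and numbers are illustrative only.
from collections import defaultdict

FIELDS = ("segment", "channel")

def learn_weights(historical_leads):
    """Weight each attribute value by its observed conversion rate."""
    counts = defaultdict(lambda: [0, 0])  # (field, value) -> [conversions, total]
    for lead in historical_leads:
        for field in FIELDS:
            key = (field, lead[field])
            counts[key][0] += lead["converted"]
            counts[key][1] += 1
    return {k: conv / total for k, (conv, total) in counts.items()}

def score(lead, weights):
    """Sum the learned conversion rates for this lead's attributes."""
    return sum(weights.get((f, lead[f]), 0.0) for f in FIELDS)

history = [
    {"segment": "hospital", "channel": "event", "converted": 1},
    {"segment": "hospital", "channel": "email", "converted": 1},
    {"segment": "pharmacy", "channel": "email", "converted": 0},
    {"segment": "pharmacy", "channel": "event", "converted": 0},
]
w = learn_weights(history)
print(score({"segment": "hospital", "channel": "event"}, w))  # 1.5
print(score({"segment": "pharmacy", "channel": "email"}, w))  # 0.5
```

The point is not the ten lines of Python; it's that extracting, analyzing, and re-deploying rules like these used to be a two-week human project.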
And you know what’s crazy? That wasn’t even a particularly advanced setup.
The Moment I Realized We Had Lost
A while back, I sat with a client. Mid-sized company, 200 employees, makes specialized software for the logistics industry. A good guy: smart, committed. He wanted to implement an AI system. Nothing fancy, no rocket science. Just intelligent route optimization for their customers.
We talked for three hours. Not a minute about technology. All compliance:
"Is this high-risk under the AI Act?"
"What documentation do we need?"
"What about GDPR?"
"How about liability?"
In the end, he said: "You know what, Christian? Let's drop it. Too complicated." A year later, I read in the newspaper that a US startup had taken market leadership in exactly his niche, with exactly the technology he wanted to implement. That is Europe 2025 in one conversation.
Open Source Changed Everything
Here it gets interesting: While we in Europe worry about how to keep up with the tech giants, the real disruption happened elsewhere.
Open Source AI has caught up. Not a little. Completely.
DeepSeek from China builds models that compete with GPT-4 – for a fraction of the cost. Kimi's K2 Thinking Model challenges Claude and ChatGPT. And these aren’t inferior copies. They are real alternatives. The consequence? The era of "one model to rule them all" is over. Companies now build hybrid stacks. One model for complex reasoning, one for long documents, a local one for sensitive data. Mix and match.
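The "mix and match" stack can be pictured as a small routing layer in front of several backends. A minimal sketch; the model names and routing rules here are invented for illustration, not a recommendation:

```python
# Illustrative sketch of a hybrid model stack: route each request to a
# different backend by task type and data sensitivity. The backend names
# and the routing policy are assumptions for the sake of the example.
from dataclasses import dataclass

@dataclass
class Request:
    task: str        # e.g. "reasoning", "long_document", "chat"
    sensitive: bool  # personal data never leaves the local model

def route(req: Request) -> str:
    if req.sensitive:
        return "local-open-source"   # self-hosted, GDPR-friendly by design
    if req.task == "reasoning":
        return "frontier-reasoning"  # strongest (and most expensive) model
    if req.task == "long_document":
        return "long-context"        # large context window
    return "cheap-general"           # default workhorse for everything else

print(route(Request("reasoning", sensitive=True)))       # local-open-source
print(route(Request("long_document", sensitive=False)))  # long-context
```

The design choice is the point: sensitivity trumps capability, so regulated data stays on self-hosted open-source models while everything else goes to whichever backend is best for the job.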
Last week, I had a CTO on the phone who proudly told me that for 90% of their use cases, they no longer need OpenAI. All open source. Self-hosted. GDPR-compliant by design. That's how it should be. But these are the exceptions; the rule, unfortunately, looks different.
The Dark Side Nobody Talks About
It's not all sunshine and innovation. 2025 had its dark sides too. The copyright wars are escalating. Anthropic pays a $1.5 billion settlement. Apple is sued. Meta is sued. Britannica sues Perplexity. The bill for training on other people's content is coming due.
And then the job question. Many don’t want to hear it, but it’s real.
I see it with clients: coding jobs disappear, content writers become fewer, customer service is automated. And now the second wave begins: legal, HR, analyst positions. Gartner published a finding that gives me nightmares: "Atrophy of critical thinking skills due to GenAI use." People stop thinking for themselves because AI does it for them.
My neighbor's 16-year-old son uses ChatGPT for all his homework. All of it. He no longer understands the material. But the grades are good. Is that the new generation?
Europe and the AI Act: A Tragicomedy in Several Acts
Let me give you the timeline without all the details. Just the key dates so you understand how absurd this is:
- The ban on certain AI practices has been in effect since February. Okay, fair enough. We don't want social scoring like in China.
- The rules for large AI models came in August. Still acceptable.
But now it gets wild: High-risk AI systems must be compliant by August 2026. Systems developed before August 2025 have until August 2027. Large-scale IT systems can wait until the end of 2030.
Read that again. 2030.
Do you know what happens in the AI world in five years? About three complete technology generations. And then in November came the "Digital Omnibus" – Europe’s way of saying "oops, maybe we were too strict." Deadlines postponed, more sandboxes (but only from 2028!), a bit of real-world testing. Only: It’s too little, too late.
What It Costs Us
With every project, I now see the same calculation. A CRM rollout that we used to do in 12 weeks now takes 20 weeks. A data management project with AI components?
Here’s the bill I have to present to my clients:
- System implementation: 150,000 euros (previously the entire budget)
- Legal opinions: 50,000-200,000 euros (new)
- Compliance implementation: 100,000-500,000 euros (new)
- Ongoing audits: 50,000-150,000 euros per year (new)
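Taking the ranges above at face value, a quick back-of-the-envelope calculation (not a client quote) shows where the first-year totals land:

```python
# Back-of-the-envelope first-year totals from the line items above (euros).
baseline = 150_000  # what the system implementation alone used to cost

low  = 150_000 + 50_000  + 100_000 + 50_000   # best case
high = 150_000 + 200_000 + 500_000 + 150_000  # worst case

print(low, high)  # 350000 1000000
# Roughly 2.3x to 6.7x the old budget, before any year-two audit costs.
print(round(low / baseline, 1), round(high / baseline, 1))
```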
Even at the low end, the budget has more than doubled; at the high end, it is nearly seven times what it was. Not because the systems got better, but because of regulation. And the real killer: the opportunity costs. The features we don't build because the budget goes to compliance. The experiments we don't run because legal uncertainty paralyzes us. The innovations that never happen because they're "too risky." A venture capitalist recently told me: "Christian, I don't invest in European AI startups anymore. The regulatory risk is too high." That was a German VC. Sitting in Germany.
My projects are getting bigger and more lucrative. But as far as the EU part is concerned, they are not better. They are just more expensive and slower.
2026: What Awaits Us
Analysts agree (details in the fact sheet at the end). Trends for 2026:
- Agentic AI will become mainstream. No longer "cool for early adopters," but "business critical for everyone."
- The productivity software world faces the biggest upheaval in 35 years. Microsoft Office, Salesforce, SAP – all will be rethought. Or die.
- Security will become a nightmare. AI-driven cyberattacks faster than any human defender.
- And the paradox: Critical thinking will become rare. Just when we need it most.
- Some companies will reduce staff. Others will massively hire for AI skills. Software engineers and data engineers are the new rock stars.
The market? Continues to grow like crazy. The numbers are absurd (see fact sheet). But growth happens elsewhere. USA, China. Not here.
Three Scenarios for Europe
I’ve thought a lot about how things could go. Three possible futures:
- Scenario one: We continue as before. The AI Act comes, bumpy but it comes. The Digital Omnibus brings minimal improvements. European AI startups remain niche players. We buy American and Chinese technology. Probability: 70 percent. Outcome: Europe as a rich but irrelevant consumer of foreign innovation.
- Scenario two: There is a political push. Real simplifications. Massive investments in AI. We properly support European champions, not just on paper. Probability: 20 percent. Outcome: We catch up. Slowly, but we catch up.
- Scenario three: Digital Balkanization. The USA goes its way, China its own, Europe another. No more exchange. Each bloc with its own standards. Probability: 10 percent. Outcome: Chaos. Nobody wins. But Europe loses the most.
My bet? Scenario one. Status quo. Managed decline.
I hope I’m wrong.
What You Should Do Now
Enough whining. What can you concretely do?
If you run a company:
Stop waiting. The AI Act will change. Again and again. Those waiting for "clarity" are already too late. Start now with low-risk use cases. Build competence. Experiment. And please, please: Rethink your workflows. Not "AI as an add-on," but truly new. From the ground up. That’s the difference between efficiency gains and transformation.
If you want to start a startup:
Think carefully about where you put your legal seat. Delaware with an EU presence can make sense. It hurts to say, but that's the reality. Don't compete as a generalist. Find a niche: medical imaging, legal tech, manufacturing. Something where you have domain expertise. And look closely at open source. DeepSeek and its peers show that it's competitive.
If you work in politics:
Three requests:
- Stop regulating and start enabling. Outcome-based instead of process-based. Sandboxes that really work. Now, not 2028.
- Invest massively. Really massively. The USA is putting in hundreds of billions. China too. Where is Europe’s AI Champions Fund?
- And: Learn from the AI Act. It can’t be perfect from day one. Grace periods, support, iterative improvement. Don’t come with fines immediately.
If you are a citizen:
- Ask your representatives: "How are you making Europe an AI leader?"
- Not: "How are you protecting us from AI?"

That's a fundamentally different frame. And that's exactly where our problem lies.
Why I’m Writing This
My work is to implement business systems: CRM, content management, data platforms. Systems that make companies more productive, optimize processes, and enable growth. Yet an increasing share of my time is no longer spent on that. Instead, it goes into compliance questions. GDPR assessments. AI Act evaluations. Data protection impact assessments. Legal reviews.
We used to talk about features. Today, we discuss paragraphs. And it frustrates me. Because I see what we could achieve. We have the talent. We have universities that rank among the world’s best. We have industries that are complex and demanding. We have capital. What is missing is the ambition and courage to lead again.
Instead, we hide behind “ethics” and “responsibility.” Both are essential, without question. But they must not become an excuse for inaction.
2026: Make or Break
Next year will decide whether Europe joins the AI starting lineup or stays on the bench keeping score. My sober prediction? We’ll be spectators. One more year of managed decline. More regulation, less innovation. But maybe… maybe something breaks the pattern. A jolt. A moment of clarity. Political courage. Maybe I’m wrong.
If not, I’ll be back at the end of 2026 with the sequel: same article, uglier charts, higher compliance budgets, lower patience.
You’re going to need it.
Christian Schappeit
https://protagx.com
PS: All data, sources, and detailed analyses can be found in the attached fact sheet. Yes, I did my homework. Yes, the numbers are depressing. No, I didn’t make them up.
PPS: This text was researched with AI assistance. I’m aware of the irony. GDPR-compliant, of course.