I’ve noticed the conversation around AI in business is missing the point. It’s almost entirely focused on tools, features, and the promise of efficiency. The SERPs are dominated by listicles comparing SaaS solutions and by corporate pages from giants like Zendesk and Salesforce promising revolutionary cost savings.
And they’re not wrong: the technology is powerful. Research suggests that up to 95% of companies plan to use AI in their customer service by 2025, so the adoption wave is already underway.
But the real work isn’t choosing a vendor. It begins after you sign up, in the quiet, difficult process of deciding where the machine ends and human judgment begins. This is a design problem, not a purchasing one.
This is the gap I’m exploring. While the market talks features, leaders and system builders are quietly asking much harder questions:
- How do we integrate this without fracturing our team’s workflow and culture?
- What are the real trade-offs between machine speed and human context?
- Where, precisely, do we draw the line?
These are the underlying concerns I see everywhere, yet they are almost entirely unaddressed in most of what’s being published. The primary challenge isn’t a lack of tools; it’s a lack of proven frameworks for collaboration.
My work across JvG Technology’s global engineering projects and Mehrklicks’ marketing systems isn’t about replacing people. It’s about building hybrid loops where automation handles scale and humans provide nuance. We’re aiming for the dynamic one Zendesk study described, in which 79% of support agents said AI copilots genuinely help them deliver better, more human service.
This corner of my site is a logbook of that process, documenting my experiments in blending automated systems with human oversight. It’s not about which tool is best. It’s about judgment.
My Framework for Exploration
To structure these experiments, I frame them across several core dimensions. Each note or project update you find here will likely touch on one of these areas, as they represent the critical friction points in any human-automation system.
Workflow Balance: Where to Draw the Line
The first and most fundamental question is allocation. Which tasks are purely computational and belong to a machine? Which require the empathy, creative problem-solving, or contextual understanding only a human can provide?
In our sales process at Mehrklicks, we tested automated lead scoring. While the model was fast, it couldn’t grasp the subtle intent behind a prospect’s question. This is a constant balancing act—finding the equilibrium between machine efficiency and human effectiveness.
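To make the allocation concrete, here is a minimal sketch of that routing pattern in Python. The fields, thresholds, and labels are illustrative assumptions, not our actual Mehrklicks model; the point is that the ambiguous cases are designed to fall to a person rather than being forced into an automated verdict.

```python
from dataclasses import dataclass


@dataclass
class Lead:
    """Illustrative lead signals; not the actual Mehrklicks scoring fields."""
    company_size: int
    pages_visited: int
    asked_open_question: bool  # e.g. a nuanced, free-text question on the contact form


def route_lead(lead: Lead) -> str:
    """Score the purely computational signals; defer anything ambiguous to a person."""
    score = (2 if lead.company_size >= 50 else 0) + (1 if lead.pages_visited >= 5 else 0)

    # Open-ended questions carry intent a model reads poorly, so they always go to a human.
    if lead.asked_open_question:
        return "human_review"
    if score >= 3:
        return "auto_fast_track"
    if score == 0:
        return "auto_nurture"
    return "human_review"  # the ambiguous middle belongs to a person


print(route_lead(Lead(company_size=120, pages_visited=8, asked_open_question=True)))
# -> human_review
```

The scoring logic is deliberately trivial; the interesting design decision is where the fallback to a human sits.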
AI Collaboration: Beyond Simple Assistance
An AI can be more than a task-doer; it can be a collaborator. The challenge is moving from a “black box” that gives answers to a transparent partner that shows its work.
We’re experimenting with training internal AI assistants that don’t just fetch data but explain the context behind it. This builds the trust needed for true collaboration and addresses the core challenge of context loss that plagues so many AI implementations.
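As a loose illustration of that contract (a sketch with hypothetical names, not any specific vendor’s API): every answer has to carry its sources and caveats, and an unsourced answer gets returned as something for a human to verify rather than as a fact.

```python
from dataclasses import dataclass, field


@dataclass
class AssistantAnswer:
    """The contract we aim for: no answer travels without its context."""
    answer: str
    sources: list[str] = field(default_factory=list)  # where the data came from
    caveats: list[str] = field(default_factory=list)  # what the assistant is unsure about


def vet(candidate: AssistantAnswer) -> AssistantAnswer:
    """Refuse black-box output: an unsourced answer becomes a question for a human."""
    if not candidate.sources:
        return AssistantAnswer(
            answer="I can't ground this claim; a human should verify it.",
            caveats=["no sources attached"],
        )
    return candidate


print(vet(AssistantAnswer(answer="Churn rose 12% last quarter.")).answer)
# -> I can't ground this claim; a human should verify it.
```

The data structure is trivial; the discipline of refusing to pass along ungrounded answers is what builds trust.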
Human Oversight: Building Feedback Loops, Not Cages
Every automated system I build has a human in the loop—not as a micromanager, but as a strategic overseer. The goal is to design elegant feedback systems where someone can easily review, correct, or override an automated decision.
This is critical for resilience. When an automated workflow breaks, and it will, recovery speed depends entirely on the quality of its human oversight loop. This is about designing for failure, not just for optimal performance.
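The shape of that loop is simple enough to sketch. Everything here is illustrative rather than a specific implementation: each automated decision is recorded, and a reviewer can accept it or replace it before it becomes final.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Decision:
    subject: str           # e.g. "refund request" (illustrative)
    automated_action: str  # what the system decided on its own
    final_action: Optional[str] = None
    overridden: bool = False


def resolve(decision: Decision,
            reviewer: Optional[Callable[[Decision], Optional[str]]] = None) -> str:
    """Apply the automated action unless a human reviewer supplies a replacement."""
    if reviewer is not None:
        override = reviewer(decision)  # None means "accept as is"
        if override is not None:
            decision.final_action = override
            decision.overridden = True
            return override
    decision.final_action = decision.automated_action
    return decision.automated_action


# A reviewer who escalates refunds and waves everything else through:
d = Decision(subject="refund request", automated_action="approve_refund")
print(resolve(d, reviewer=lambda dec: "escalate" if "refund" in dec.automated_action else None))
# -> escalate
```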
Ethical & Cultural Effects: How Automation Shapes People
Introducing automation doesn’t just change a workflow; it changes the team. It can either foster curiosity or kill it. If a system removes all friction and challenge, it can make people passive.
We saw this happen in a data analysis process and had to redesign it to prompt human investigation rather than simply delivering a final report. The goal is to build systems that augment human intelligence, not just replace human effort.
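Sketched loosely (the field names and the 20% threshold are hypothetical), the redesigned report surfaces deviations as open questions and deliberately has no conclusion field, so the analyst still has to investigate:

```python
def build_report(metrics: dict[str, float], baseline: dict[str, float]) -> dict:
    """Surface deviations as open questions instead of delivering a closed verdict."""
    open_questions = [
        f"Why did '{name}' move from {baseline[name]:.1f} to {value:.1f}?"
        for name, value in metrics.items()
        if name in baseline and abs(value - baseline[name]) > 0.2 * max(abs(baseline[name]), 1e-9)
    ]
    # Deliberately no "conclusion" field: the report ends where human investigation begins.
    return {"metrics": metrics, "open_questions": open_questions}


print(build_report({"signup_rate": 3.1}, {"signup_rate": 4.4})["open_questions"])
# -> ["Why did 'signup_rate' move from 4.4 to 3.1?"]
```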
Failure Analysis: Learning from What Breaks
I learn more from a system that fails than from one that works perfectly on the first try; that’s where the most valuable insights are found. When an AI predicts customer churn that never materializes, or a workflow gets stuck in an infinite loop, it’s an opportunity.
It reveals a flawed assumption in the initial design. I document these failures transparently because they are the most potent teachers in system design.
Design Principles: Architecting for Nuance
From these experiments, core principles emerge. For instance, “Automate the predictable, not the pivotal.” Or, “A good system makes its human operator smarter.” These principles become the foundation for the next system, creating a cumulative learning cycle. It’s the difference between simply implementing tools and architecting intelligent, human-centric operations.
Questions I’m Wrestling With
This is an ongoing process of inquiry. The questions below guide my thinking and experimentation.
Isn’t the ultimate goal to automate as much as possible for maximum efficiency?
No. The goal is maximum effectiveness, and efficiency is just one component. A fully automated system that alienates customers or demotivates a team is a failure, no matter how efficient it is.
Research shows 64% of consumers trust AI more when it demonstrates empathy—something that requires a human touchpoint. The most effective systems I’ve built are hybrids that use automation to free up human capacity for higher-value, more empathetic work.
How do you get a team to trust—and adopt—new automated systems?
By making them co-designers, not just end-users. Trust isn’t built by mandates; it’s built through transparency and control.
When the team understands how the system works, why it makes certain decisions, and how they can override it, they see it as a tool that serves them, not a black box that dictates to them. We don’t deploy systems; we run collaborative experiments.
What happens when automation fails and impacts a customer?
It’s not a question of “if” but “when.” The key is designing for graceful failure. This means having immediate human alert systems, clear protocols for intervention, and most importantly, a culture where the focus is on solving the customer’s problem first and diagnosing the system second.
A resilient system assumes failure and makes recovery fast, transparent, and human-led.
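In code, that posture looks roughly like this. It’s a sketch assuming a simple in-process queue and standard logging, not a particular stack: wrap the automated path so that any failure alerts a person and hands the customer over immediately.

```python
import logging
from typing import Callable

logger = logging.getLogger("automation")


def handle_ticket(ticket: dict, automated_path: Callable[[dict], str], human_queue: list) -> str:
    """Try the automated path; if it fails, alert a human and hand the customer over.

    Solve the customer's problem first, diagnose the system second.
    """
    try:
        return automated_path(ticket)
    except Exception as exc:  # assume failure will happen; never strand the customer
        logger.error("Automation failed on ticket %s: %s", ticket.get("id"), exc)
        human_queue.append(ticket)  # immediate, visible handoff to a person
        return "escalated_to_human"


# Forcing a failure to show the handoff:
queue: list[dict] = []
print(handle_ticket({"id": 7}, automated_path=lambda t: str(1 / 0), human_queue=queue))
# -> escalated_to_human, and the ticket now sits in the human queue
```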
This is my open logbook for navigating these challenges. The notes and project updates are snapshots from my work building systems that scale. I hope they provide a more grounded, honest perspective on what it truly takes to combine human insight with machine intelligence.
Every automated process is a mirror of its designer’s judgment. The goal is to refine that judgment.