At JvG Technology, the biggest bottleneck in our engineering projects isn’t the engineering itself—it’s the documentation. A machine can be designed, built, and tested in a fraction of the time it takes to clearly and accurately document its every function, specification, and maintenance procedure. This documentation is non-negotiable; it’s the bridge between our engineering team and the clients who operate our solar module production lines worldwide.
For years, this process has been a manual, time-intensive task that pulls senior engineers away from innovation and into Microsoft Word. The challenge is that creating great documentation requires two distinct skill sets: the deep technical knowledge of an expert and the clarity of a skilled writer. Finding both in one person—who also has the time to spare—is rare.
This dilemma led to an experiment: Could a Large Language Model (LLM) act as a junior technical writer? Could we use it to generate a solid first draft that our senior engineers could then quickly review, edit, and approve? The goal wasn’t to replace the expert but to augment them, transforming their role from author to editor.
The Problem with Traditional Documentation
Before diving into the system we built, let’s look at the specific friction points we were trying to solve. In a high-stakes engineering environment, documentation errors aren’t just typos; they can lead to operational mistakes, equipment damage, or safety hazards.
The challenges we faced will likely be familiar to anyone in a technical field:
- Time Sink: Our experts were spending dozens of hours writing, formatting, and proofreading.
- Inconsistency: Different engineers use different terminology and writing styles, leading to a disjointed final product.
- Knowledge Hoarding: Key information often remains with the lead engineer, creating a single point of failure if that person becomes unavailable.
This isn’t just an internal observation. Research from Accenture suggests that Generative AI could automate up to 40% of working hours for highly skilled professionals, and our documentation process felt like a prime candidate.
The Human-AI Collaboration Workflow
We didn’t just give an AI a vague prompt and hope for the best. That approach is a recipe for errors and the dreaded AI ‘hallucinations’—plausible-sounding but factually incorrect statements. Instead, we focused on designing a workflow that treated the AI as a tool, guided by human expertise at every critical step. This is a core principle in building systems for scalability; the system must manage the tool, not the other way around.
Our workflow breaks down into three distinct phases:
1. The Context Packet (Human Task): The engineer assembles a ‘context packet.’ This isn’t a long, detailed brief; it’s a collection of bullet points, technical spec sheets, schematic diagrams, and key operational parameters. The goal is to provide the AI with all the necessary factual puzzle pieces.
2. The First Draft (AI Task): We feed this context packet to the LLM with a structured prompt. The prompt acts as a template, instructing the AI to organize the information into our standard documentation format, adopt a specific tone, and generate sections for introduction, operation, maintenance, and troubleshooting. (A minimal sketch of this step follows the list.)
3. Review and Refine (Human Task): The AI-generated draft is returned to the engineer. This is the most critical step. The engineer is no longer staring at a blank page but editing a structured document. Their job shifts from writing to fact-checking, clarifying ambiguities, and adding the nuanced insights the AI missed.
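To make the workflow concrete, here is a minimal sketch of how phases one and two fit together. Everything in it is illustrative rather than our production setup: the packet field names, the template wording, the generate_draft helper, and the gpt-4o model name are all assumptions. The sketch uses the OpenAI Python client, but any chat-style LLM API would slot in the same way.

```python
# Minimal sketch of the context-packet -> first-draft pipeline.
# Field names, template wording, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are a junior technical writer at a machinery manufacturer.
Using ONLY the facts in the context packet below, write a draft manual with
these sections: Introduction, Operation, Maintenance, Troubleshooting.
Tone: precise and instructional. If a fact is missing, write [TODO: engineer]
instead of inventing one.

CONTEXT PACKET:
{packet}
"""

def format_packet(packet: dict) -> str:
    """Flatten the engineer's bullet points and specs into plain text."""
    lines = []
    for section, items in packet.items():
        lines.append(f"## {section}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

def generate_draft(packet: dict, model: str = "gpt-4o") -> str:
    """Phase 2: turn a human-assembled context packet into a first draft."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(packet=format_packet(packet))}],
    )
    return response.choices[0].message.content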
This human-in-the-loop system is essential for another reason: security. A recent Salesforce survey found that 73% of employees believe GenAI introduces new security risks. By keeping our proprietary schematics and core IP out of the initial prompt and using the human expert as the final gatekeeper, we mitigate this risk. The AI organizes public-facing data; the human integrates the sensitive details.
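One way to enforce that gatekeeping, sketched below with assumed field names, is a whitelist filter applied to the packet before it leaves our network; only explicitly approved, non-sensitive keys ever reach the third-party API.

```python
# Hypothetical whitelist filter: only approved, non-sensitive fields
# from the context packet are ever sent to the external LLM.
APPROVED_FIELDS = {"General Description", "Operating Parameters",
                   "Maintenance Intervals", "Safety Notices"}

def sanitize_packet(packet: dict) -> dict:
    """Drop anything not whitelisted (schematics, client data, core IP)."""
    redacted = {k: v for k, v in packet.items() if k in APPROVED_FIELDS}
    dropped = set(packet) - set(redacted)
    if dropped:
        print(f"Withheld from prompt (engineer adds these in review): {dropped}")
    return redacted
```

The sensitive material never disappears from the final document; it is simply added by the engineer during the review phase, inside our own systems.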
Observations from the Field
We ran this experiment on a documentation module for a new conveyor system. The results were immediate and insightful.
1. A Massive Leap in Productivity
The most striking result was the speed. An MIT study found that AI boosted writing productivity by 40% for college-educated workers, and our experience landed in the same ballpark. A task that previously took an engineer 8-10 hours was reduced to 2-3 hours, primarily spent on review and refinement.
This mirrors findings from a GitHub Copilot study, which showed that developers using their AI assistant coded 55.8% faster. The key in that study, just as in our experiment, is the human acceptance and review of AI suggestions. The productivity gain comes from this collaboration, not from blind automation.
2. The Engineer Becomes an Editor
The cognitive load on our team changed dramatically. Instead of struggling to find the right words, they could focus on their core expertise: ensuring technical accuracy. The AI produced a consistently formatted, clearly written draft that served as an excellent foundation.
The human touch, however, was irreplaceable. The AI would occasionally generate a sentence that was grammatically perfect but technically nonsensical. Our engineer’s role was to catch these subtleties.
Here’s a concrete example. The AI generated a generic maintenance instruction, while our engineer refined it with crucial, experience-based context.
AI-Generated Draft: ‘Periodically check the conveyor belt for signs of wear and tear.’
Engineer’s Refinement: ‘At the start of each shift, visually inspect the belt for fraying along the edges. Run a gloved hand along the underside to check for gouges, especially after processing abrasive materials.’
This highlights the core of our finding: AI handles the structure and boilerplate language, freeing up the human expert to add the irreplaceable layer of real-world wisdom.
3. The Power of the ‘Context Packet’
The quality of the AI’s output was directly proportional to the quality of the input. A lazy, one-sentence prompt yielded a generic, useless draft. A well-structured context packet with clear data points produced a document that was 80% of the way there.
This reinforces a fundamental principle of automating workflows without losing control: your automated systems are only as good as the data and instructions you provide them. Garbage in, garbage out.
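One way to keep that control, sketched below with assumed section names and thresholds, is a pre-flight check that rejects a thin packet before it ever reaches the model.

```python
# Illustrative pre-flight check: reject sparse context packets up front,
# since a thin packet reliably yields a generic, useless draft.
REQUIRED_SECTIONS = ["General Description", "Operating Parameters",
                     "Maintenance Intervals", "Troubleshooting Notes"]

def validate_packet(packet: dict, min_items: int = 3) -> list[str]:
    """Return a list of problems; an empty list means the packet is usable."""
    problems = []
    for section in REQUIRED_SECTIONS:
        items = packet.get(section, [])
        if len(items) < min_items:
            problems.append(f"'{section}' has {len(items)} items; "
                            f"need at least {min_items}.")
    return problems

packet = {"General Description": ["Belt conveyor, module transport"],
          "Operating Parameters": ["Speed: 0.2-1.5 m/s", "Load: max 25 kg/m",
                                   "Ambient: 10-40 °C"]}
for issue in validate_packet(packet):
    print("Packet incomplete:", issue)
```

A check like this turns ‘garbage in, garbage out’ from a slogan into a gate: the engineer gets immediate feedback on what is missing instead of discovering it in a weak draft.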
Final Thoughts and Next Steps
This experiment confirmed my hypothesis: The role of AI in modern business operations isn’t about replacing skilled professionals. It’s about building systems where they can offload repetitive, structural tasks to focus on high-value work. The engineer’s expertise became more valuable, not less, because it was applied more efficiently.
Our next step is to refine this workflow and deploy it across other departments, including marketing content creation at Mehrklicks and operational manuals for our saddle brands. The principles remain the same: define a structured process, use the AI for the heavy lifting, and empower the human expert as the final arbiter of quality and accuracy.
Frequently Asked Questions (FAQ)
What exactly is an LLM?
An LLM, or Large Language Model, is a type of artificial intelligence trained on vast amounts of text data. It learns patterns, grammar, and information, allowing it to generate new, human-like text in response to prompts. Think of it as an incredibly advanced autocomplete that can write paragraphs, not just words.
Is this process safe for proprietary or sensitive information?
It can be, provided you design the system correctly. We are careful not to include sensitive intellectual property or client data in the ‘context packet’ fed to a third-party AI. The model generates the structure using non-sensitive data, and our engineers add the proprietary details during the human review phase, which happens entirely within our secure systems.
What AI model did you use for this?
For this experiment, we used a commercially available model—something like OpenAI’s GPT-4 or Anthropic’s Claude. The specific model is less important than the workflow built around it. The principles of providing high-quality context and rigorous human review apply to any powerful LLM.
Do our engineers need to become prompt-engineering experts?
No. Our goal was to create a system that doesn’t require specialized AI skills. We developed a simple, standardized prompt template that our engineers can use. They just need to focus on what they do best: compiling the accurate technical data for the context packet.
What is the biggest mistake to avoid when trying this?
The biggest mistake is to trust without verifying. It’s dangerous to assume the AI’s first draft is 100% correct and skip a thorough review by a subject matter expert. The system works because it pairs the AI’s speed with a human’s critical thinking and deep expertise. Never skip the human review step.