AI Adoption in Engineering: Lessons from a Year at HiveMQ
From AI Hype to Real Gains: When a Joke Becomes a Mandate
In April 2025, I posted an April Fool's joke about AI in engineering—announcing an "AI-first engineering policy." It was meant as satire: a playful exaggeration of where AI hype seemed to be heading at the time.
Less than twelve months later, parts of that joke have become reality, and exaggeration has given way to adoption. AI tools have moved from novelty to default in many engineering workflows. The conversation has shifted from "should we use AI?" to "how do we use AI responsibly without breaking trust, quality, or reliability?"
As VP Engineering at HiveMQ, I sit right at the fault line between AI hype and real gains. That position has forced us to learn quickly, discard assumptions, and stay brutally honest about what actually works—and what doesn't.
This post distills the most important lessons from a year of real AI adoption at HiveMQ. While gained in an engineering context, they apply far beyond it.
Expectation vs. Reality of AI in Engineering
A year ago, the dominant narrative around AI featured predictions like "Engineers will be 10× more productive," "AI will write most of the code," and "Individual PMs, EMs, or CTOs will replace entire engineering teams." These views were voiced by industry professionals, backed by impressive demos, and reinforced by rapid advances in tooling.
Reality turned out to be more nuanced and more interesting.
AI did not replace engineering judgment, reduce system complexity, or eliminate responsibility. What it did change was friction: shorter feedback loops, cheaper exploration, and a lower cost of testing ideas. That distinction, between removing friction and removing thinking, turns out to matter a lot.
Lesson 1: AI Is a Force Multiplier, Not a Substitute
The first and most important lesson is simple: AI amplifies what you bring to the table.
Clear thinking becomes clearer. Strong domain knowledge becomes more powerful. Vague ideas become confidently wrong answers.
At HiveMQ, this became obvious very quickly. AI performs best in well-understood, generic domains such as standard CRUD operations, boilerplate code, and common integration patterns. It struggles precisely where our differentiation lives: distributed systems, protocol semantics, edge cases, and correctness guarantees.
When an engineer with deep knowledge of MQTT session handling uses AI to accelerate implementation, the results are excellent. When someone attempts to use AI to shortcut understanding those semantics, the results look plausible but break under load or in edge cases that matter for production systems.
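To make "protocol semantics" concrete, here is a minimal sketch of the kind of detail that plausible-looking generated code tends to miss. It assumes the open-source HiveMQ MQTT Client library; the broker host, client identifier, and topic are illustrative placeholders. Resuming an MQTT 5 session requires connecting with cleanStart=false and a non-zero session expiry interval, and offline queuing only applies to QoS 1/2 subscriptions; a clean-start connection with expiry 0 compiles, works in a demo, and silently drops every message the device missed while offline.

```java
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.mqtt5.Mqtt5BlockingClient;
import com.hivemq.client.mqtt.mqtt5.Mqtt5Client;

public class PersistentSessionSketch {

    public static void main(String[] args) {
        // Broker host and client identifier are illustrative placeholders.
        Mqtt5BlockingClient client = Mqtt5Client.builder()
                .identifier("sensor-gateway-42")
                .serverHost("broker.example.com")
                .buildBlocking();

        // The easy-to-miss part: cleanStart=false plus a non-zero session
        // expiry interval asks the broker to keep the session, and with it
        // the QoS 1/2 messages queued while the client is offline.
        client.connectWith()
                .cleanStart(false)
                .sessionExpiryInterval(3600) // seconds the broker retains the session
                .send();

        // Offline queuing only applies to QoS 1/2 subscriptions;
        // QoS 0 messages are never retained for a disconnected client.
        client.subscribeWith()
                .topicFilter("factory/+/telemetry")
                .qos(MqttQos.AT_LEAST_ONCE)
                .send();

        client.disconnect();
    }
}
```

It takes an engineer who knows MQTT to recognize that these few settings are where correctness lives, and that knowledge is exactly what AI does not supply.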
The pattern repeats across domains. AI is a leverage tool, not a replacement for expertise.
Transferable advice: Invest in clarity before automation. If the input is fuzzy, the output will be fast, but wrong. This applies to engineering designs, product requirements, customer communication, and strategy work alike.
Lesson 2: The Bottleneck Is Moving
Historically, software delivery bottlenecks often sat in implementation. Writing code took time, and execution speed mattered. That's no longer the case.
With AI assistance, execution has become cheap. What has become expensive instead is framing the right problem, validating assumptions, integrating changes safely, and building confidence and trust. Coding is no longer the hard part. Deciding what to build, and whether it's correct, now dominates the effort.
This shift has implications for how teams organize. Code reviews now focus less on syntax and style, more on design intent and correctness. Planning conversations carry more weight. The ability to articulate requirements precisely—something that was always valuable—has become essential.
At HiveMQ, we noticed that engineers who spent more time understanding a problem before engaging AI assistance consistently delivered better outcomes than those who jumped straight to implementation. The old advice to "think before you code" has new teeth.
Transferable advice: Spend more time framing the problem than executing the solution. Execution speed without direction is just faster rework.
Lesson 3: Adoption Is Cultural, Not Technical
One surprising outcome: the specific AI tools mattered far less than expected.
The real difference came from behavior. What worked well, we observed, was experimentation with clear intent; explicit skepticism toward outputs; treating AI results as drafts, not decisions; and sharing patterns and learnings across teams. What didn't work? Blind trust, delegating responsibility to tools, and tool worship. The key is to keep pushing the boundaries while staying realistic and truthful about the ROI.
The teams that benefited most treated AI like a very fast junior colleague: helpful, tireless, and always in need of review. They used AI to generate options, explore alternatives, and handle tedious tasks, but kept decision-making authority firmly with experienced engineers.
Teams that struggled tended to fall into one of two failure modes. Some avoided AI tools entirely, viewing them with suspicion. Others embraced them uncritically, accepting outputs without verification. Neither approach worked.
The productive middle ground required a specific mindset: curiosity combined with skepticism. That mindset comes from culture: learning "how to prompt" matters, but learning "how to doubt" matters just as much. AI adoption is a learning problem, not a procurement problem.
Transferable advice: Treat AI output like junior work: useful, fast, and in need of review—for now. This framing guards against both failure modes: fear and blind trust.
What This Means for HiveMQ and Our Customers
For HiveMQ, AI adoption is not about chasing trends but rather about delivering value faster without compromising reliability. Our customers depend on HiveMQ for mission-critical IoT infrastructure. They run connected vehicles, manufacturing lines, energy grids, and healthcare systems on our platform. "Move fast and break things" was never our philosophy, and AI adoption hasn't changed that.
What has changed is where engineering effort concentrates:
Engineering spends more time on design and correctness
Product iteration speeds up—but clarity requirements increase
Customers benefit from faster delivery while maintaining trust
Efficiency gains emerge without erosion of differentiation
Speed and quality are not opposites—but speed demands higher standards, not lower ones. When you can iterate faster, the cost of shipping something poorly thought-out also compounds faster.
We need to raise the quality bar as speed increases, because velocity without quality is just faster failure. The time saved by AI assistance shouldn't all go toward shipping more; some of it should go toward shipping better.
From Copilot to Colleague—Carefully
AI tools are evolving from copilots into something closer to collaborators. They help with research, drafting, and exploration—and that trend will continue.
Yet ownership remains distinctly human. AI tools do not own decisions, carry accountability, or serve as the source of truth. This matters especially in domains like ours.
When a customer asks why their MQTT broker behaved a certain way, "the AI suggested it" is not an acceptable answer. Engineers own the code they ship. Product managers own the requirements they define. Leaders own the strategies they pursue. AI assistance doesn't change that chain of responsibility.
In AI, planning too far ahead isn’t strategy—it’s speculation. Beyond two quarters, plans are mostly narratives we tell ourselves to feel prepared. The tooling will change, the constraints will move, and yesterday’s “vision” will quietly become today’s technical debt. The only durable advantage is not prediction, but judgment: knowing when to commit, when to adapt, and when to let humans stay firmly in the loop.
Where the Real Advantage Lies
These three lessons touch every aspect of my work, as I care deeply about product quality, engineering culture, and developing high-performing teams. After a year of hands-on AI adoption, one conclusion stands out clearly: The companies that win with AI won't be the ones that automate the most. They'll be the ones that understand the most.
AI accelerates teams that already know what they're doing. It exposes gaps in teams that don't. AI rewards clear intent, judgment, and deep domain knowledge. These pillars anchor our AI-first engineering journey as we continue delivering the Industrial AI platform our customers rely on and love.
Those fundamentals of clarity, judgment, and domain expertise have always been HiveMQ's strengths. They matter more now, not less. In a world where everyone has access to the same AI tools, differentiation comes from what you know and how clearly you can apply it.
That's where the real advantage lies.
If you’re adopting AI in mission-critical IoT or industrial systems, HiveMQ teams can help. Contact us today!
