When AI Becomes the Foundation Instead of the Finish
- David Morin
- 5 days ago
- 5 min read

I finally experienced one of my worst fears with AI in the wild. And yes, I'm going to write about it.
The Setup
I was brought in to help with what I was told was a challenging integration. On the surface, it looked easy — mapping sheets available, some JSON samples, legacy artifacts to reference. I've probably done this type of integration dozens of times. A lot of fields, but I estimated a few days' work. I built the interface, got a success message, and moved on. We had about 20 interfaces to get through, an aggressive deadline, and momentum mattered.
Weeks later, I got a message that something was wrong.
What followed was a slow unraveling that I wouldn't wish on anyone.
When things fall apart
The first thing I discovered was that the vendor integration would return a success message if I sent a picture of a cat. The "success" 200 response code along with a transactionid meant nothing. The only way to know if data actually landed was to check the vendor portal, but I had to escalate to management just to get access, which took days. And once I had it, there was a significant delay between when I sent a message and when it appeared.
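To make the problem concrete: a 200 plus a transaction ID only tells you the request was *accepted*, not that the data landed. Here's a minimal sketch of the distinction; the `lookup` callable stands in for checking the vendor portal, and all names are hypothetical, not from the real interface.

```python
# Hedged sketch: treat a 200 + transaction ID as "accepted", not "delivered".
# Delivery is only confirmed by reading the record back (here, a fake lookup
# standing in for the vendor portal).

def confirm_delivery(status_code: int, transaction_id: str, lookup) -> str:
    """Classify an integration send using both the response and a read-back check."""
    if status_code != 200 or not transaction_id:
        return "failed"           # a transport-level failure is at least honest
    if lookup(transaction_id):    # read-back: did the data actually land?
        return "delivered"
    return "accepted-unverified"  # the dangerous middle state from this story

# A fake vendor that acknowledges everything but only persists some records
persisted = {"tx-1001"}
print(confirm_delivery(200, "tx-1001", lambda tx: tx in persisted))  # delivered
print(confirm_delivery(200, "tx-2002", lambda tx: tx in persisted))  # accepted-unverified
```

The point isn't this particular helper; it's that until something plays the role of `lookup`, "success" is just a word in a response body.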
Then came the brute-forcing.

Leading zeros on material numbers — strip them. Wait. Leading zeros on line numbers too — strip those. A field not mentioned in the spec was mandatory — defaulted. Wait up to 40 minutes. Check the portal. One error surfaces. Fix it. Send again. Wait again. Repeat. None of these are big fixes individually, but the cumulative wait for feedback was brutal.
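Each of those discoveries eventually got folded into the mapping. As a sketch (field names and the default value here are hypothetical), the accumulated fixes amount to a single normalization pass, which is where rules like these belong so they're applied consistently instead of rediscovered one 40-minute round trip at a time:

```python
# Hedged sketch of the trial-and-error fixes collected into one place.
# Field names and the default are illustrative, not from the real interface.

def normalize_line(line: dict) -> dict:
    out = dict(line)
    out["material"] = out["material"].lstrip("0")  # vendor rejects leading zeros
    out["line_no"] = out["line_no"].lstrip("0")    # ...on line numbers too
    out.setdefault("plant", "1000")                # undocumented mandatory field
    return out

print(normalize_line({"material": "000123", "line_no": "0010"}))
# {'material': '123', 'line_no': '10', 'plant': '1000'}
```

Trivial code; the cost was never the fix, it was the feedback loop.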
Eventually I got something to show success on the portal. Done, right?
Not quite. A functional resource noticed by chance that data wasn't showing up on the vendor GUI. So after investigating the source data and the mapping, and escalating to management, I got on a call with the vendor and asked for a schema. The response? "What's a schema?"
Oh no!

A few days later I got an email with an OpenAPI spec. It wouldn't load — SAP CI flagged multiple empty objects. I did a full line-by-line review on a 3,000+ field structure, got it working, and started mapping, only to notice fields mentioned in the mapping were missing. When I asked the vendor resource, he told me he had fed the original mapping sheet into an AI agent to generate the schema. It had truncated the full structure and typed many fields as empty objects. And he hadn't reviewed it before sending it to me.
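The empty-object problem is easy to catch mechanically, which makes the lack of any review sting more. Here's a minimal sketch (my own helper, not an SAP CI feature) that walks a JSON-schema-like dict and flags properties typed as empty objects, the kind of untyped stub that kept the generated spec from loading:

```python
# Hedged sketch: flag properties that are empty objects or objects with no
# defined properties in a JSON-schema-like structure. Illustrative only.

def find_empty_objects(node, path="$"):
    hits = []
    if isinstance(node, dict):
        for name, sub in node.get("properties", {}).items():
            sub_path = f"{path}.{name}"
            if sub == {} or (isinstance(sub, dict)
                             and sub.get("type") == "object"
                             and not sub.get("properties")):
                hits.append(sub_path)  # untyped stub: the schema says nothing here
            hits.extend(find_empty_objects(sub, sub_path))
    return hits

schema = {"properties": {"order": {"type": "object", "properties": {
    "id": {"type": "string"},
    "buyer": {"type": "object"},   # object with no properties
    "notes": {},                   # completely empty
}}}}
print(find_empty_objects(schema))  # ['$.order.buyer', '$.order.notes']
```

A dozen lines like this, run before the spec went out the door, would have replaced my line-by-line review of 3,000+ fields.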
Let me sit with that for a second.
The same bad source we had already identified as bad, fed into AI— and it went out the door without a second look.
To be clear: AI didn't cause the underlying problems here. It just hid them.
The vendor asked an agent for a sample JSON for a common business object, and it delivered exactly that, which wasn't a real sample message for our purposes. The Excel source fed to the AI had no field types, so those fields became empty objects in the generated schema; it wouldn't load into CI and was meaningless as a schema. The AI also removed fields without warning. And as I came to understand the data model, it was, well, strange and recursive. The schema wasn't a universal structure or built for my customer; it was based on someone else's custom implementation, pushed through an AI filter that abstracted and genericized the business object. There had been mapping exercises before I arrived, but without a solid structure, how useful could they have been? This was a rework nightmare.
The vendor had leaned into AI hard enough that fundamentals, like knowing what a schema is or reviewing output before shipping it, had quietly fallen away. And when I tried to explain the problems, the vendor resource had AI respond to me instead of actually listening. They only really heard me when I started sending answers from AI myself. The trouble was they were asking their tools questions without the right context to get the right answers, and when I tried to point them in the right direction, it didn't land until an AI said it instead. It didn't help that an inexperienced resource treated this iterative discovery approach as normal and thought wanting a schema was a special request.

I'll be honest: I think some of the behavior existed so that if things went wrong, there was AI to blame.
Ultimately, my client didn't sign up for any of this. They bought a product and a service, and they're the ones stuck with the bill and the pain of being a free beta tester. To move things along, I was giving free training and getting into arguments over basic requirements because my vendor counterpart wasn't capable.
AI probably looked appealing to the vendor as a way to accelerate delivery and reduce the cost of those pesky expensive integration resources. What it was actually doing was accelerating the appearance of delivery while the real problems stayed buried — and my client, eager to get things done, had no idea the roadblocks were being camouflaged.

What would I tell the vendor?

- Start with modernizing and standardizing the data model, and document the integration approach.
- Build your solution "integration-first" so you can auto-generate a schema (ideally a real OpenAPI schema) and have it available before you ever sit down with a customer to do mapping or ask them to integrate.
- Give your integration partners accurate, timely feedback, not a fake success message and a back-end portal you control the keys to. Don't restrict access to the portal, technical resources, or key documentation.
- Hire people who know what a schema is, or teach the ones you have so they don't get tripped up on basic questions.
- Provide structurally accurate examples, even if the data is wrong.
- Remember: unlike me, AI tells you what you want to hear. Sometimes that's a beautiful lie instead of an unpleasant truth. Especially for client-facing or deadline-impacting deliverables, a five-minute review can save days or weeks of headaches and back-and-forth.
- Maybe that pesky integration consultant who keeps telling you your integrations are broken is actually helping you. Be nice to him, and listen.
- Finally, no matter how good your product may be, issues like this ruin your credibility and raise reasonable concerns about future product support.
Phew… rant over.

Summary: Building an AI foundation
AI in its current form is best as a finishing tool, not a foundation. It can smooth over rough edges and accelerate real work, but it can't replace the fundamentals, which are still the bedrock of integration success. When it's used unchecked to mask problems, AI makes everything harder to identify and fix. It can create convincing samples and documentation to move development forward, but the later a problem is identified in the dev cycle, the longer the fix, so moving forward on bad inputs can be a mistake. I can see the intent was to speed things up, but it was short-sighted: it turned a two-day interface into weeks of painful back and forth. All in all, a bad experience for everyone.
In the meantime, it's good blog fodder. And while I admit being annoyed, my goal — in the words of Afroman — is to turn a bad time into a good time, and I look forward to laughing at this in retrospect. Maybe this will be a cautionary tale for someone else.
Either way, more integrations to go. Onward. 🚀
If any of this sounds familiar — whether you're a vendor trying to modernize your integration approach or a customer with an integration concern — we offer training and consulting/advisory services. Sometimes the fastest path forward is a few hours with someone who's already been in the weeds. Feel free to reach out and someone on my team will get back to you. info@happyintegrations.work



