
From PO BPMs to Integration Suite: A High-Speed Migration Guide

  • David Morin
  • Feb 24
  • 6 min read

Been a while since I posted, but I figured I would share this recent experience. I was handed a rare challenge: a client with a critical NetWeaver Business Process Management (BPM) flow that needed to be migrated to SAP Cloud Integration (CI), ASAP.


A consultant rising to the challenge
This definitely looks just like me

We all know the consultant’s disclaimer: direct migration for BPMs isn't a thing. Ideally, you’d take the time to redesign and modernize the process as a whole to fit the Integration Suite paradigm. But business reality sometimes trumps "best practice." After pitching modernization, I conceded. Move now, fix later, but do it fast.


So, how do you migrate a complex orchestration in record time without it blowing up in your face? Here is my roadmap from the trenches.


The Strategy: Four Pillars of Speed


Car going way too fast
Go fast

To maximize velocity, I focused on four key areas:

  1. Deep Legacy Process Understanding: Know thy Enemy

  2. Artifact Reuse:  !reinvent(wheel).

  3. Efficient Testing: Development should be the hard part

  4. Resilient Error Handling: Remember, you might have to support this, be kind to your future self.


1. Deep Legacy Process Understanding: Decoding the PO BPM

The existing BPM was a bit of a monster. Here’s a blurry picture for reference:


 

Blurred complicated BPM
It's intentionally blurred

Scary at first, but fortunately, after digging in, all it was really doing was orchestrating multiple sequences of calls to respond to a single synchronous source. Everything seemed possible with standard CI objects.

Fundamentally, a BPM uses gateways (parallel, sequential, routers) that have direct equivalents in CI. The message sizes were small and the calls were fast, so I knew CI could handle it and I could largely keep the structure in place. After manually digging through the configuration, I realized I could reuse a lot of the structures: a massive head start.


2. Artifact Reuse: The Great Artifact Rescue (Data Objects and Properties)


Decoding hieroglyphs
In the old days we coded in stone

The biggest technical hurdle was the Data Objects (DOs). In NW BPM, DOs persist data across different calls, with mappings moving data between the DOs and specific Message Types (MTs). CI doesn't have a native "persistent data object" that behaves exactly like that.

The Solution: Groovy-powered XML fragments. I decided to treat Exchange Properties like DOs. By storing small XML snippets in properties, I could reuse the original message mappings. This saved weeks of analysis and validation (15+ mappings; 35+ if you count input and output DO mappings).
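To make the trick concrete, here is a minimal sketch of the DO-as-property idea in plain Groovy. In a real iflow script you would call setProperty/getProperty on the SAP Message object; here a plain map stands in for the exchange properties, and the property name and payload fields are invented for illustration.

```groovy
// Sketch only: a map stands in for the CPI exchange properties, and
// DO_Order / OrderId / Status are made-up names for illustration.
def properties = [:]

// After an intermediate call: persist the fragment, shaped like the old DO,
// so the original message mapping can be reused unchanged.
properties['DO_Order'] = '''<OrderDO>
  <OrderId>4711</OrderId>
  <Status>OPEN</Status>
</OrderDO>'''

// Before the next call: pull the fragment back out and read it like a DO.
def order = new groovy.xml.XmlSlurper().parseText(properties['DO_Order'] as String)
assert order.OrderId.text() == '4711'
assert order.Status.text() == 'OPEN'
```

Because the snippet keeps the legacy DO shape, the reused mappings see the same structure they always did.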

Pro Tip: I looked into reverse-engineering SAP’s "Galaxy" workflow logic but quickly realized that "discretion is the better part of valor." Instead of getting lost in legacy code, I used Groovy to bridge the gap between MTs and properties and got moving. I do think it becomes a strong maybe if a company had to move, say, 20 or 30 similar interfaces.

3. Testing: Data is King

I lucked out: at this client I had a dedicated test team. Because I was reusing artifacts, we could use the same data and scripts from the SAP PO environment to validate the CI results. This moved my goal posts from testing every possible variation to passing a successful unit test. Also, because of the DO-to-property approach, I could present the inputs and outputs in a format similar to legacy, which makes it easy to identify and validate whether it is working.
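Since the CI outputs mirror the legacy shapes, validation can stay at the field level rather than byte-for-byte. A rough sketch of the kind of check this enables, with invented payloads and field names:

```groovy
// Compare a legacy PO response against the CI response field by field,
// ignoring formatting differences. Structure and names are illustrative.
def slurp = { String xml -> new groovy.xml.XmlSlurper().parseText(xml) }

def legacy = slurp('<Resp><OrderId>4711</OrderId><Status>OPEN</Status></Resp>')
def ci     = slurp('''<Resp>
  <OrderId>4711</OrderId>
  <Status>OPEN</Status>
</Resp>''')

// Same values, different whitespace: the comparison still passes.
['OrderId', 'Status'].each { field ->
    assert legacy."$field".text() == ci."$field".text()
}
```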

4. Handling the "Boom": Error Responses within legacy constraints

Troubleshooting without error messages
Troubleshooting without good error messages

For this interface, error handling is tricky. Since the calling system handles retries, my job was to ensure that if anything failed, the error was captured and a graceful response sent back. If there was an error, I was able to tag the interface with a custom status in the monitor and even send a message to an alert framework. It looked the same to the end user, but I could tell that something broke, and even why. Interfaces are really hard to fix without those details.
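The status tagging itself boils down to very little code. A minimal sketch, assuming the documented SAP_MessageProcessingLogCustomStatus exchange property (SAP restricts custom status values to short alphanumeric strings; the helper name and status text below are my own, and a plain map stands in for the CPI Message object):

```groovy
// Sketch: in a real iflow script this would be message.setProperty(...)
// on the SAP Message object inside an exception subprocess.
String toCustomStatus(String raw) {
    // CPI only accepts short alphanumeric custom status values
    // (SAP documents a 40-character limit), so sanitize first.
    def s = raw.replaceAll(/[^A-Za-z0-9]/, '')
    s.size() > 40 ? s.substring(0, 40) : s
}

def properties = [:]  // stands in for the CPI exchange properties
properties['SAP_MessageProcessingLogCustomStatus'] = toCustomStatus('Lookup failed: timeout')

assert properties['SAP_MessageProcessingLogCustomStatus'] == 'Lookupfailedtimeout'
```

The caller still gets its graceful response; the custom status just makes the failure visible in the message monitor.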


A timeline breakdown

I managed to convert, unit-test, and document the interface in about 40 hours of work. Here’s what that actually looked like:

  • Day 1: Power-review of the PO system. 5 pages of notes on a notepad. Realized reverse engineering "Galaxy Workflow" was going to be a nightmare.

  • Day 2: Drew the CI outline. Imported every artifact I could reuse from PO (WSDLs, XSDs, message mappings). Thought deeply about IDoc splitting and how that could be logged properly. Found some cool pipeline artifacts to … leverage. Put in placeholders for the scripts I knew I would need to write (every place we move to or from a DO, for example).

  • Day 3: Groovy goodness. Wrote the scripts, fought with the Groovy 2.0 upgrade, and got a "deployable" (but broken) flow.

  • Day 4: Unit testing. Fixed parallel multicasting quirks and sync-handling bugs.

  • Day 5: Declared success and wrote the design docs. Confidently told the test team they could take over while I did a victory lap.


  • Days 6–8 (added a ton of lipstick): The real-world polish. With fresh eyes, my Groovy looked ugly, so I made it prettier and easier to support. Pretty-printed the JSON to match the legacy interface perfectly and externalized parameters. Then came a panicked fix after breaking a naming convention at the last second: I lost my mind figuring out where I had accidentally written a property in lowercase instead of camelCase.
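The pretty-printing itself is a one-liner in Groovy's standard library; a minimal example with an invented payload:

```groovy
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// Re-indent a compact JSON payload so it matches a formatted legacy output.
def raw    = '{"orderId":"4711","status":"OPEN"}'
def pretty = JsonOutput.prettyPrint(raw)

// Only the formatting changes; the content is identical.
assert new JsonSlurper().parseText(pretty) == new JsonSlurper().parseText(raw)
assert pretty.contains('\n')
```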

Lessons Learned

Lessons learned
My army of 1

0. More Visibility Into the Complexity Is Needed: This requirement came from a business need, with only a high-level understanding of what was being asked. To be fair, when they first heard I was unit testing, I got a "that was fast", but I don’t know if they understand the challenges unless they review the documentation that shows the before and after. For estimating effort, it’s so much better to start with a tool like Figaf, where at least I can show that this "interface" you are asking me to move is actually more like 20+ interfaces. I produced a 20+ page TS, but who is going to read that?

1. Groovy Is So Cool: Helper methods can be great for simplifying code, and there are some cool tricks to make code readable and maintainable. There are also some hard-to-debug issues you can run into, and I have some new opinions on AI and vibe coding. I’ve got more tricks to share in a future post.

2. Multicasting Quirks: Multicasts in CI (and Camel in general) involve shallow copies. Not for the first time, I thought they worked differently than they do, and there are some gotchas I want to share in another future post, if only for my own reference.

3. The Power of Tools: If I were doing this again, I’d probably want a tool like Figaf (not sponsored). Having an automated migration report or an Excel-based review would have saved my eyes a lot of clicking through PO objects during the analysis. Pulling test messages from PO is huge. Better versioning is also good, as I lost my code at one point. I haven’t tried the Groovy code editor, but I imagine it would have fit right in.

4. Naming Inconsistencies: Oh boy. The legacy naming conventions often matched, but when they didn’t … boom. I had to do a lot of side-by-side comparison to make sure every lower was lower and every upper was upper. It got worse because I had properties, DO structures, and message types that referred to the same fields differently. No, no, num, Num, Number, _num: I feel like every variation was used in the field names, and I look forward to standardizing them in the future redo. One thing I think is often missed going into migrations is a governance game plan that maximizes readability and supportability to future-proof situations like this.
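On the multicast point: the underlying trap is plain shallow-copy semantics, easy to reproduce outside CI entirely (the payload here is invented):

```groovy
// A shallow copy shares nested objects: mutate one branch's copy and the
// other branch sees it too, which is essentially the multicast gotcha.
def body       = [order: [id: '4711', status: 'OPEN']]
def branchCopy = new LinkedHashMap(body)  // shallow: inner map is shared

branchCopy.order.status = 'FAILED'        // "branch 2" mutates its copy...
assert body.order.status == 'FAILED'      // ...and "branch 1" sees the change
```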
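And on the naming point, one way I could have caught those variants earlier is a quick case- and punctuation-insensitive grouping over the field names (the list below is invented):

```groovy
// Group field names that differ only by case or punctuation, so the
// num/Num/_num family shows up as a single collision to review.
def fields = ['No', 'no', 'num', 'Num', 'Number', '_num']
def collisions = fields
    .groupBy { it.toLowerCase().replaceAll(/[^a-z0-9]/, '') }
    .findAll { key, variants -> variants.size() > 1 }

assert collisions['num'] == ['num', 'Num', '_num']
assert collisions['no']  == ['No', 'no']
assert !collisions.containsKey('number')  // 'Number' has no variants here
```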

Final Thoughts

Final thoughts
From the perspective of my dog

Is storing XML fragments in properties the best approach to this problem? …Maybe? I was worried JSON wouldn’t be readable for comparison. I do think a time-consuming redesign will be a good idea in the future, but it would not have been fast, especially with all the connection points. I bet someone has done something smarter, but this did work, and it seems right for this requirement. For an as-is migration, it saved weeks of redoing mapping logic, kept the outputs consistent with multiple systems already proven to work, and sped up validation. Again, I would love to know if someone has a smarter way; please feel free to reach out.

Moving fast in a PO to CI migration requires a mix of legacy knowledge, a deep understanding of CI, Groovy skills, a supportability focus, and a healthy respect for testing. But it also requires flexibility and rising to the challenge. I am walking away with a few new tricks, a successful unit test, and, I hope, a happy customer.

What’s your go-to strategy for high-speed PO BPM to Integration Suite migrations? Let me know.


 
 
 
