Hey everyone! I just published a new helper tool that makes it easy to identify and remove duplicate scripts from your integrator.io environment.
Why I built this:
Over time, integrations can accumulate a lot of duplicate scripts, especially when cloning flows or importing templates. These duplicates can clutter your environment and create confusion during maintenance or debugging.
This toolkit helps you automatically:
- Detect scripts with identical logic (via hash comparison)
- Keep the oldest version of each script
- Update all dependencies to point to the original
How it works:
- Detect Duplicate Scripts
Scans all scripts and identifies duplicates using a content-based hash (see the sketch after this list).
- Extract Script Dependencies
Looks through flows, imports, exports, and integrations to find where the duplicate scripts are used.
- Update Script References
Replaces all references to duplicate scripts with the preserved original script ID.
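For those curious, the detection step boils down to something like the Node.js sketch below. The field names (content, createdAt, _id) are simplified assumptions about the script resource, and the real template does this inside flows, but the core idea is group-by-hash and keep the oldest copy:

```javascript
const crypto = require('crypto');

// Rough sketch of the detection step: group scripts by a SHA-256 hash of
// their content and keep the oldest copy in each group. Field names
// (content, createdAt, _id) are assumptions about the script resource shape.
function findDuplicateGroups(scripts) {
  const groups = new Map();

  for (const script of scripts) {
    const hash = crypto
      .createHash('sha256')
      .update(script.content || '')
      .digest('hex');
    if (!groups.has(hash)) groups.set(hash, []);
    groups.get(hash).push(script);
  }

  // Only groups with more than one script are duplicates; the oldest entry
  // is the one to keep, the rest are candidates for re-pointing.
  return [...groups.values()]
    .filter((group) => group.length > 1)
    .map((group) => {
      const sorted = [...group].sort(
        (a, b) => new Date(a.createdAt) - new Date(b.createdAt)
      );
      return { keep: sorted[0], duplicates: sorted.slice(1) };
    });
}
```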
Optional next step:
Once you're done deduping, you can use the "Delete nondependent integrator.io resources" tool to clean up any leftover unused scripts.
Integration name:
Script cleanup & deduplication tool
(available now in the Marketplace)
Hope this helps keep your environments clean and easier to manage! Let me know if you have questions or feedback.
Hi Tyler, that is a great idea; having multiple copies of the same script is definitely something I see a lot.
The template as it is now looks to have a few issues, though:
- After running flow 1, it doesn't actually seem to call flows 2 and 3. I think this is because the postResponseMap script after the lookup is not triggered if there is no import after it.
- The comparison looks for scripts with the same content hash, but doesn't take into account scripts that have the same contents but sit in prod/sandbox/different environments, or that are copies of each other made using ILM. IMHO these should never be merged into one script.
Thanks for reviewing @basvanditzhuijzen!
-
I just installed the template in another Celigo account, and the flows are triggering each other correctly. The first flow can take some time to trigger the second flow for some reason, so I'll check why it took 7+ minutes in my case. The postResponseMap scripts do run even if there is no flow step after them. This is a change that came with the new flow UI: we wanted users to be able to add steps in order without having to first add a subsequent flow step and then come back for response mapping/postResponseMap scripting. The template also uses the alias functionality so that the listeners on flows 2 and 3 get an alias assigned to them. Setting aliases on listeners isn't available in the UI, but it can be done manually via an API call.
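For reference, that alias call looks roughly like the sketch below. The payload shape (an aliases array on the integration, with an alias name and an _exportId pointing at the listener) should be treated as an assumption here; double-check it against the integrator.io API docs before using it.

```javascript
// Rough sketch only: assigning an alias to a listener (an export) by updating
// the integration's aliases array through the REST API. The aliases payload
// shape and field names are assumptions; verify against the API docs.
async function addListenerAlias(integration, listenerExportId, apiToken) {
  const body = {
    ...integration, // PUT replaces the document, so send the full resource back
    aliases: [
      ...(integration.aliases || []),
      { alias: 'flow2-listener', _exportId: listenerExportId },
    ],
  };

  const res = await fetch(
    `https://api.integrator.io/v1/integrations/${integration._id}`,
    {
      method: 'PUT',
      headers: {
        Authorization: `Bearer ${apiToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    }
  );
  if (!res.ok) throw new Error(`Alias update failed: ${res.status}`);
  return res.json();
}
```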
-
I didn't include a check for prod/sandbox because we're pretty close to moving away from that and into the true multi-environment architecture. Also, in the last step of actually updating the resource with the marked-to-keep script, it will fail to update a resource where the sandbox flag on the resource does not match the sandbox flag on the script. That being said, I just updated the template and republished it so that it now groups by both the sandbox flag and the hash.
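Conceptually, the grouping key is now something like this sketch (not the template's literal code; the field names are assumptions):

```javascript
const crypto = require('crypto');

// Sketch of the updated grouping key: scripts only count as duplicates when
// both the content hash and the sandbox flag match, so production and
// sandbox copies are never merged into one.
const groupKey = (script) =>
  `${script.sandbox ? 'sandbox' : 'production'}:` +
  crypto.createHash('sha256').update(script.content || '').digest('hex');
```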
Hmm, it might be that there is a difference in how older and newer integrator.io instances behave with postResponseMap scripts.
I made a test flow with a postResponseMap script on a lookup that always throws an error.
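The script itself is essentially just this (minimal sketch):

```javascript
// Minimal postResponseMap hook that always throws: if the hook runs at all,
// the flow errors out, which makes it obvious whether the script was
// triggered on a given account.
function postResponseMap(options) {
  throw new Error('postResponseMap was triggered');
}
```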
On a newer EU multi-env instance (production only) it runs and throws an error.
On our US partner environment, as well as another older customer's US environment, the same flow runs without triggering the script.
Maybe it's connected to the postResponseHookToProcessOnChildRecord change? If I cobble together a script where that setting is true, it does trigger the postResponseMap function.
I changed the hash transform to include sandbox and the source id so it doesn't try to deduplicate ILM clones:
{{{hash "sha256" "hex" (join "" record.content record.sandbox record._sourceId)}}}
That works too. For new installs, the script is updated to just group by sandbox/hash.
I'm guessing one of your accounts is possibly not on the newer microservices architecture and is running on the older monolith architecture. If you ping me the owner user emails, we can check that.