Merging multiple sources in one flow vs. cloning near-identical flows

I have a fairly complex flow to import job timesheet data from our internal SQL Server application into NetSuite. I need this flow to execute in two situations:

  1. If a NetSuite project has a checkbox set indicating all of its timesheet data needs to be refreshed completely, OR
  2. New timesheet data has been entered in SQL Server since the last flow run.

I have 2 sources, one from NetSuite and one from SQL Server. Each source returns the job's unique ID along with flags that influence the downstream lookups.

However, it's quite possible that a new job entered into NetSuite is flagged as needing all timesheets imported, and the SQL server sees this new data too. This means the flow may run twice for a given job.

The brute-force solution seems to be to clone the flow: the copy with the NetSuite source runs first, and I'd have it call the copy with the SQL Server source.

I don't think a Lookup is a solution here because if the initial source returns no records, the rest of the flow doesn't run.

Is there some more elegant solution I'm missing here? I'm trying to avoid unnecessary cloning of flows, to reduce maintenance confusion in the future.

NOTE: The timesheet flow(s) are triggered after another flow that checks for new jobs and creates them in NetSuite. Another problem I'm having is that a new job or new timesheet may be created in the time gap between the "Import new Jobs to NetSuite" flow and the later delta import of new timesheets. I think I'll try storing the start time of the main flow in a cache so later chained flows can cap the delta window they export.

You can insert a dummy record when the source returns nothing. Add a preSavePage script on your source export, check whether data is empty (data.length === 0), and if so return a dummy record that you can then branch on to perform the other logic.

function preSavePage (options) {
  // True when the export returned no records for this page.
  const isEmpty = !options.data || options.data.length === 0;

  return {
    // Substitute a single dummy record so the rest of the flow still runs.
    data: isEmpty ? [{ message: 'This is a dummy record' }] : options.data,
    errors: options.errors,
    abort: false,
    newErrorsAndRetryData: []
  };
}
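
Downstream, you can then detect that dummy record and branch on it. Here's a minimal sketch of a JavaScript filter for that branch, assuming the record shape and 'message' marker from the hook above (adjust to whatever flag you use):

function filter (options) {
  // Route this branch only when the record is the dummy injected by preSavePage.
  return !!options.record && options.record.message === 'This is a dummy record';
}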


Can you send a screenshot of what your flow looks like? Is it only 2 exports and 1 import or does it have lots of other things going on?

There is currently no fancy branching, but there are 9 steps total (3 lookups and 6 imports).

[Screenshot: flow diagram of the job task and timesheet item steps, all marked as successful.]

I have 2 copies of this flow. The first one uses NetSuite as the source; the second uses a SQL Server query. All the logic afterward is the same. The flows are daisy-chained, but that now means my filtering based on last export time is off by a couple of minutes.

I suspect I may need to just make One Flow To Rule Them All to ensure all the delta exports get the same "Last Export Time" timestamp. There's no elegant way to pass a timestamp, though I'm about to start experimenting with a lookup cache.

The lastExportDateTime value is passed from one flow to the next when flows are chained together. So, if the first flow runs with 2025-05-14T17:42:22.526Z, the next flow will use the same timestamp. I had already noticed this behavior during manual runs, and I just confirmed it works the same for scheduled runs.
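
For context on why the shared timestamp matters: as I understand it, the delta export against SQL Server can reference {{lastExportDateTime}} directly in its query, roughly like this (table and column names here are just placeholders):

SELECT t.JobId, t.EmployeeId, t.Hours, t.ModifiedAt
FROM dbo.TimesheetEntries t                      -- placeholder table
WHERE t.ModifiedAt > {{lastExportDateTime}}      -- only rows changed since the last run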

I can't think of a clean way to do this without using two flows. One option might be to set up those shared downstream steps in API Builder, turning them into reusable assets; then your flow could simply call that API. However, this depends on your data volume, since you might run into rate limits.

With that setup, you'd only need one flow with two exports and one import (the import would go to the API you built). Since flows process one export at a time, the first export would hit the API first. You'd just need to set the concurrency to 1 on the connection going to the API.

Based on my work-in-progress fix for this, I'm not sure lastExportDateTime actually is passed down the chain. Either way, I now have a working solution!

Here's what I have now:

  • Created a Lookup Cache
  • In the first step of the chain, upsert into the cache key "MaxTimeSheetDelta" with a value built from a handlebars expression: {{dateFormat "YYYY-MM-DDTHH:mm:ss.SSS[Z]" job.parentJob.startedAt}}
  • In subsequent chained flows, get MaxTimeSheetDelta from the cache in a source export transform step.
  • Add an output filter on the source that only passes rows whose modified timestamp is less than MaxTimeSheetDelta (a sketch of this filter is below).

Result: no matter how long prior chained flows take to run, every flow in the chain filters by the same startedAt time to prevent modifications in the source data from showing up too early.
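
If it helps anyone, the output filter can be a small JavaScript filter script. This is only a sketch under my assumptions: lastModified is the row's modified timestamp and maxTimeSheetDelta is the cached value merged into the record by the transform step (your field names will differ):

function filter (options) {
  const record = options.record;

  // Both values are assumed to be ISO-8601 strings, e.g. 2025-05-14T17:42:22.526Z.
  const modified = new Date(record.lastModified);     // row's modified timestamp (assumed field)
  const cutoff = new Date(record.maxTimeSheetDelta);  // value read from the lookup cache (assumed field)

  // Only pass rows modified before the chain's shared start time.
  return modified < cutoff;
}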
