Slow running flow with webhook listener source

We have several flows that use a webhook listener as the source, with NetSuite and an S3 backup as the remaining steps. We keep running into issues where the flow runs incredibly slowly, and the slowdown doesn't resolve until there is a big enough gap where no new records are sent.

E.g. we had a flow yesterday that gets shipments from one of our warehouses and creates the item fulfillment in NetSuite (with some lookups to NetSuite in the flow to source the tracking links from a custom record).

It took pretty much all day to complete, and finally caught up at 5am after 7 hours of no new shipments.

I’m starting to think that a real-time flow is an inefficient way of handling this and we need to re-engineer it.

My current thinking is to split the flow as follows -

Current

1 flow to do everything

Source - webhook listener (with hook to split records per tracking number)

This then branches to -

Branch 1 - NetSuite lookups to get the tracking number + create fulfillment record

Branch 2 - Store JSON backup in S3

As this uses a webhook listener, all of this happens in real time, continuously.
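For anyone unfamiliar with the "split records per tracking number" hook, the idea is just to fan one webhook payload out into one record per tracking number before the branches run. Celigo hooks are written in JavaScript, so this is only a language-agnostic sketch of the logic, and the payload shape (`shipment_id`, `tracking_numbers`) is an assumption, not our actual schema:

```python
def split_per_tracking_number(payload):
    """Return one record per tracking number in a shipment payload.

    Each output record carries the shipment fields plus a single
    'tracking_number' key, so downstream branches process one
    tracking number at a time.
    """
    records = []
    for tracking in payload.get("tracking_numbers", []):
        # Copy every field except the list we are splitting on.
        record = {k: v for k, v in payload.items() if k != "tracking_numbers"}
        record["tracking_number"] = tracking
        records.append(record)
    return records


# A shipment with two tracking numbers becomes two records.
shipment = {"shipment_id": "SHP-1", "tracking_numbers": ["TN-1", "TN-2"]}
print(split_per_tracking_number(shipment))
```

Each resulting record then goes through both branches (the NetSuite fulfillment and the S3 backup) independently.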

Proposed

Split to 2 flows

Flow 1 will still have the webhook listener, but then will just have branch 2 from above to store the JSON in S3

Flow 2 will then do the branch 1 steps from above, but on a schedule, and its source will be the S3 bucket from flow 1.

Has anyone had similar issues? Is a scheduled flow a much more sensible way of doing this?

Thanks!

While the total time taken isn't necessarily reflective of the processing time for each record, it is confusing, because new events pile into the same job as they arrive. For example, if one record comes in at 8 am and other records keep coming in until 5 pm, that doesn't mean the 8 am record is still processing; it just means the overall job is still running, and we keep adding new events to the already existing job. However, if you don't have skip aggregation set on your S3 import, that 8 am record may actually still be waiting and not be backed up, because we attempt to aggregate all records for the job into one file. If you enable skip aggregation, you'd get one file created per page of records (which is most likely one to a handful of records).
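The skip aggregation behavior above can be pictured as a difference in how many S3 objects a job produces. A minimal sketch, with hypothetical key names (the real file naming in integrator.io will differ):

```python
def s3_keys_for_job(job_id, pages, skip_aggregation):
    """Return the S3 object keys a job's records would be written to.

    skip_aggregation=True  -> one object per page, written as each page
                              finishes, so early records are backed up early.
    skip_aggregation=False -> one aggregated object for the whole job, so
                              nothing lands in S3 until the job completes.
    """
    if skip_aggregation:
        return [f"backup/{job_id}/page-{i}.json" for i in range(len(pages))]
    return [f"backup/{job_id}.json"]


# A long-running job with 3 pages: 3 files vs 1 file that waits on everything.
print(s3_keys_for_job("job-1", [["r1"], ["r2"], ["r3"]], skip_aggregation=True))
print(s3_keys_for_job("job-1", [["r1"], ["r2"], ["r3"]], skip_aggregation=False))
```

This is why, on a job that keeps absorbing new events all day, the aggregated file (and the 8 am record in it) doesn't appear in S3 until the whole job finally finishes.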

If you don't need this to be real-time, approach 2 is better overall, because you can also reduce the messages being sent to the queue of your NetSuite connection and take advantage of batch calls to NetSuite. In your current flow, we probably only send 1 record per API call to NetSuite, because that's likely all we have on the page. In a batch flow, we'd have a default of 20 records per page (you can increase this in the export settings). This lets us send 20 records to NetSuite at a time, freeing up that NetSuite connection queue quite a bit and probably being more performant overall.