Ah, that makes sense.
It would be helpful to have a setting that lets retries execute all following steps too, giving users the choice based on their needs.
We are currently sending Order Confirmations (OCs) to SAP Ariba.
Without going too much into the details: in NetSuite, we have opportunities with line items. Each line item can trigger an OC, depending on several variables. This logic is complex and involves multiple decision points.
Typically, when we post an OC to Ariba, we receive either a success or error response:
- If the response indicates an error, we update the related records in NetSuite with the status "OC rejected by Ariba", along with the corresponding error message.
- If the response is successful, we update the records with "OC submitted."
These statuses are critical for downstream processing in both NetSuite and Celigo.
When we receive an error, we review the message before taking further action. For example, earlier today we received an "unknown error" from Ariba. After retrying the same request, it was processed successfully, but the associated NetSuite records were not updated to reflect this.
This results in inaccurate status tracking for OCs that are ultimately accepted by Ariba, which can lead to issues in subsequent processing.
Hi @nuriensing, were you able to find any sort of workaround for your issue? I'm having a similar problem and would love to hear what approach worked for you.
Yep, I actually mentioned that just before your comment.
What we're doing in NetSuite is setting a field on the record to something like "Rejected" when an error happens, and logging the error too.
That way, it's easy to spot the issue in the ERP, and we can restart the whole flow for that record from the beginning instead of relying on the retry, which doesn't really help here since it picks up right where the error happened.
If you need more help or want to brainstorm, let me know.
We do the same thing with our internal flows. It's also beneficial this way because users may not have access to both NetSuite and Celigo, so it essentially lets them manage the errors in NetSuite. In our case, the flow is only three steps (export, import, and import back to NetSuite). This would, of course, get more difficult if it's something more complex or is a different use case where you don't have that write-back capability.
In my case, errors typically occur before the initial import, so the record (item receipt) doesn't exist yet in NetSuite to tag. Instead, I write the error status back to a MS SQL table where we source the data (which originally is in a pending status), marking the status as "errored." This gives us visibility into failures and triggers our existing notification workflow.
My similar problem is that when I retry after resolving the issue, the branching logic doesn't re-evaluate to determine whether the import succeeded, so the status never updates in the database. I also can't simply rerun the flow, because the "errored" status written back to the SQL table drops the record out of the query used for the initial export.
My goal is to keep error management out of the database (avoiding manual status updates) while preserving error visibility there for notifications and other downstream processes. I'm considering two approaches:
1. Remove the "Proceed" setting so the status readback only happens on success. However, this would require my client to monitor Celigo directly for errors instead of relying on our existing database notifications, which I was hoping to avoid.
2. Create a separate reconciliation flow that periodically checks errored rows in the SQL table against NetSuite and updates statuses automatically.
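For reference, here's a rough sketch of what that reconciliation job might look like. The table and column names (`receipt_staging`, `unique_id`, `status`) are placeholders for my actual schema, and the NetSuite side is stubbed as a set of keys that stands in for whatever lookup confirms the item receipt actually exists (e.g. a saved search):

```python
def rows_to_resolve(errored_rows, netsuite_receipt_keys):
    """Return the unique IDs of rows marked 'errored' in SQL whose
    item receipt actually exists in NetSuite, i.e. the retry worked
    but the status readback never ran."""
    return [row["unique_id"] for row in errored_rows
            if row["unique_id"] in netsuite_receipt_keys]

def reconcile(conn, netsuite_receipt_keys):
    # Hypothetical table/column names; adjust to your schema.
    cur = conn.execute(
        "SELECT unique_id FROM receipt_staging WHERE status = 'errored'")
    errored = [{"unique_id": r[0]} for r in cur.fetchall()]
    for uid in rows_to_resolve(errored, netsuite_receipt_keys):
        # The retry actually succeeded upstream, so close out the row.
        conn.execute(
            "UPDATE receipt_staging SET status = 'complete' "
            "WHERE unique_id = ?",
            (uid,))
    conn.commit()
```

Run on a schedule, this would catch exactly the case I hit today: a retry that succeeded in NetSuite while the SQL row stayed "errored."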
Any insight or thoughts would be much appreciated. Thank you!
How are these errors typically resolved? Are you modifying the retry data within IO and then retrying? Are you simply retrying because it's an intermittent error that works on a second attempt? Are you modifying something in NetSuite and then retrying, which then makes the import work? Or are you modifying something in the source SQL Server and then retrying?
@tylerlamparter Currently we are receiving errors because Purchase Order records are being imported into NetSuite that are not as expected (an issue with a separate integration), and this breaks this integration's dependencies/known requirements.
Once the records are corrected in NetSuite, I can retry the flow and the Item Receipts import successfully as intended. However, this is where I run into my status readback issue in the SQL table: it does not get updated, since the branching logic is not re-evaluated on the retry.
Is it one particular record being fixed in the separate integration? For example, if it's a vendor being fixed, you could listen for Vendor updates in a new flow, call our API to get errors from this flow, then fetch the retry data from the errors, and see if the retry data is applicable to the updated vendor. If it is, you call our API to retry the error.
You would need some way to alert the user that there is an error. I guess in that case, you could have another flow that runs periodically to fetch errors from this flow and then sends them a Slack message, email, Teams message, etc.
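To make that concrete, here is a rough Python sketch of such a listener. The endpoint paths, field names (`errorId`, `retryData`, `vendorId`), and auth header shape are illustrative assumptions only, not a documented contract; verify them against the API reference before relying on this:

```python
import json
import urllib.request

BASE = "https://api.integrator.io/v1"  # integrator.io API base URL

def retry_data_applies(retry_data, updated_vendor_id):
    """Pure check: does this error's retry data reference the vendor
    that was just fixed? The 'vendorId' field name is an assumption."""
    return retry_data.get("vendorId") == updated_vendor_id

def api(method, path, token, body=None):
    # Thin helper around urllib; real code would add error handling.
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method=method)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def retry_applicable_errors(flow_id, import_id, vendor_id, token):
    # Assumed endpoints; check the error-management API docs.
    errors = api("GET", f"/flows/{flow_id}/{import_id}/errors", token)
    ids = [e["errorId"] for e in errors
           if retry_data_applies(e.get("retryData", {}), vendor_id)]
    if ids:
        api("POST", f"/flows/{flow_id}/{import_id}/errors/retry",
            token, {"errorIds": ids})
    return ids
```

The key design point is keeping `retry_data_applies` a pure function, so the "is this error actually fixed by this vendor update?" decision is easy to test and tune independently of the API plumbing.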
You can add a third and fourth approach alongside what Tyler suggested.
3. Introduce a “pending_retry” status in SQL
Instead of trapping the record in an “errored” state, the flow can write back:
- "errored" when the import fails
- "pending_retry" when the issue has been resolved (either manually or by a small helper script)
Your source query becomes:
```sql
WHERE status IN ('pending', 'pending_retry')
```
This keeps the retry lifecycle predictable. When you retry the flow, the branching logic gets a clean pass because the record re-enters through the same path as any other pending record. It also allows you to keep your SQL-based notifications without needing a separate reconciliation flow.
The trade-off is that something (or someone) has to flip the status back to pending_retry once the root issue is fixed.
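For illustration, that "small helper script" could be as simple as the sketch below. The table and column names are assumptions (borrowing the staging-table naming from earlier in the thread), and the `?` placeholder style is what pyodbc/sqlite drivers use:

```python
def release_for_retry(conn, unique_ids):
    """Move the given rows from 'errored' to 'pending_retry' so the
    export query (status IN ('pending', 'pending_retry')) re-selects
    them on the next flow run."""
    for uid in unique_ids:
        # Guard on status = 'errored' so completed or in-flight rows
        # are never accidentally re-released.
        conn.execute(
            "UPDATE receipt_staging SET status = 'pending_retry' "
            "WHERE unique_id = ? AND status = 'errored'",
            (uid,))
    conn.commit()
```

Whether this runs as a script or is triggered by a human is exactly the trade-off described above.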
4. Operational ownership of errors
Another option is to keep the "errored" status as-is, but treat it as a signal for review. An application manager (or equivalent role) evaluates the error, confirms the upstream issue is resolved, and then manually changes the status back to "pending" or "pending_retry".
This is often cleaner in environments where errors represent data quality issues that shouldn’t be retried automatically. Automatic retries only make sense for transient failures; data-related errors usually need someone to verify and correct the underlying problem.
Given my limited visibility into the upstream process, I'm thinking about combining Options 3 and 4 with a custom trace key override. The idea would be to have our integration owner bulk-update errored records to "pending_retry" once the upstream issue is verified, allowing them to flow through as fresh executions.

Then, I'm considering configuring a custom trace key based on the source record's uniqueID, allowing Celigo to correlate the original error with the successful reprocessing and mark it resolved. This would keep error visibility and ownership in SQL as much as possible, instead of having to manage retries in Celigo. Really appreciate both your insights.
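As a sanity check on the correlation idea, this is the core matching logic I have in mind, with illustrative field names (`errorId`, `traceKey`): if the trace key is the source row's uniqueID, any open error whose key matches a successfully reprocessed record is a candidate for resolution.

```python
def resolvable_errors(open_errors, successful_trace_keys):
    """Return the IDs of open errors whose trace key matches a record
    that has since been reprocessed successfully. Field names here are
    hypothetical, not a documented Celigo payload shape."""
    return [e["errorId"] for e in open_errors
            if e.get("traceKey") in successful_trace_keys]
```

The hope is that Celigo's own trace-key matching does this automatically, but having the logic spelled out makes it easy to fall back to a small reconciliation script if it doesn't.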