You're running into this issue because the script you're using is written for a transform script, but you're actually placing it inside a preSavePage script. The two script types expect different data structures and return formats, which is why your flow breaks when you use the wrong one.
In a `transform` script, you return a single record. But in `preSavePage`, you must return an object containing `data`, `errors`, `abort`, and `newErrorsAndRetryData`. If you don't, the export shows zero records because integrator.io doesn't know what to pass forward.

Here's how your logic should look inside a `preSavePage` script:
```javascript
function preSavePage(options) {
  options.data.forEach(page => {
    page.forEach(record => {
      if (record.tracking) {
        record.tracking = record.tracking.split(',');
      }
    });
  });
  return {
    data: options.data,
    errors: options.errors,
    abort: false,
    newErrorsAndRetryData: []
  };
}
```
This matches your input structure (an array of pages), splits the `tracking` string into an array, and returns the properly formatted object.
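To see the shape change in isolation, here is a quick standalone check of that logic with made-up sample data (the `orderId` and tracking values are hypothetical, not from your flow):

```javascript
// Same preSavePage logic, run against a hypothetical one-page payload
// so you can see the before/after shape of the tracking field.
function preSavePage(options) {
  options.data.forEach(page => {
    page.forEach(record => {
      if (record.tracking) {
        record.tracking = record.tracking.split(',');
      }
    });
  });
  return {
    data: options.data,
    errors: options.errors,
    abort: false,
    newErrorsAndRetryData: []
  };
}

const result = preSavePage({
  data: [[
    { orderId: 1, tracking: '1Z999,1Z888' }, // comma string → array
    { orderId: 2 }                           // no tracking → untouched
  ]],
  errors: []
});

console.log(result.data[0][0].tracking); // → [ '1Z999', '1Z888' ]
```

Note that records without a `tracking` value pass through unchanged, because of the `if (record.tracking)` guard.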
For reference, here is the stub for `preSavePage`:
```javascript
/*
 * preSavePageFunction stub:
 *
 * The name of the function can be changed to anything you like.
 *
 * The function will be passed one 'options' argument that has the following fields:
 *   'data' - an array of records representing one page of data. A record can be an object {} or array [] depending on the data source.
 *   'files' - file exports only. files[i] contains source file metadata for data[i]. i.e. files[i].fileMeta.fileName.
 *   'errors' - an array of errors where each error has the structure {code: '', message: '', source: '', retryDataKey: ''}.
 *   'retryData' - a dictionary object containing the retry data for all errors: {retryDataKey: { data: <record>, stage: '', traceKey: ''}}.
 *   '_exportId' - the _exportId currently running.
 *   '_connectionId' - the _connectionId currently running.
 *   '_flowId' - the _flowId currently running.
 *   '_integrationId' - the _integrationId currently running.
 *   '_apiId' - the _apiId currently running.
 *   '_parentIntegrationId' - the parent of the _integrationId currently running.
 *   'pageIndex' - 0 based. context is the batch export currently running.
 *   'lastExportDateTime' - delta exports only.
 *   'currentExportDateTime' - delta exports only.
 *   'settings' - all custom settings in scope for the export currently running.
 *   'sandbox' - boolean value indicating whether the script is invoked for sandbox.
 *   'testMode' - boolean flag indicating test mode and previews.
 *   'job' - the job currently running.
 *
 * The function needs to return an object that has the following fields:
 *   'data' - your modified data.
 *   'errors' - your modified errors.
 *   'abort' - instruct the batch export currently running to stop generating new pages of data.
 *   'newErrorsAndRetryData' - return brand new errors linked to retry data: [{retryData: <record>, errors: [<error>]}].
 * Throwing an exception will signal a fatal error and stop the flow.
 */
function preSavePage (options) {
  // sample code that simply passes on what has been exported
  return {
    data: options.data,
    errors: options.errors,
    abort: false,
    newErrorsAndRetryData: []
  }
}
```
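The `newErrorsAndRetryData` field is also how you can flag bad records yourself instead of throwing. The sketch below is a hypothetical example (the "missing tracking" rule and error code are invented), but the error and retry-data shapes follow the stub's documentation; here `options.data` is treated as a flat array of records for one page, as the stub describes:

```javascript
// Hypothetical sketch: drop records without a tracking value and
// report them via newErrorsAndRetryData, using the shapes from the stub:
// [{retryData: <record>, errors: [{code, message, source}]}].
function preSavePage(options) {
  const newErrorsAndRetryData = [];
  const goodRecords = options.data.filter(record => {
    if (!record.tracking) {
      newErrorsAndRetryData.push({
        retryData: record, // the record itself becomes the retry data
        errors: [{
          code: 'MISSING_TRACKING',        // invented code for this example
          message: 'Record has no tracking value',
          source: 'preSavePage'
        }]
      });
      return false; // exclude the bad record from the page
    }
    return true;
  });
  return {
    data: goodRecords,
    errors: options.errors,
    abort: false,
    newErrorsAndRetryData
  };
}

const out = preSavePage({
  data: [{ id: 1, tracking: '1Z999' }, { id: 2 }],
  errors: []
});
```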
And here is the stub for `transform`:
```javascript
/*
 * transformFunction stub:
 *
 * The name of the function can be changed to anything you like.
 *
 * The function will be passed one 'options' argument that has the following fields:
 *   'record' - object {} or array [] depending on the data source.
 *   'settings' - all custom settings in scope for the transform currently running.
 *   'testMode' - boolean flag indicating test mode and previews.
 *   'job' - the job currently running.
 *
 * The function needs to return the transformed record.
 * Throwing an exception will return an error for the record.
 */
function transform (options) {
  return options.record
}
```
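For contrast, if you did want to keep this logic in a transform script, the same tracking split would operate on one record at a time and return just that record, with no wrapper object:

```javascript
// The tracking split written as a transform script: one record in,
// one record out. No {data, errors, abort, ...} wrapper is needed here.
function transform(options) {
  const record = options.record;
  if (record.tracking) {
    record.tracking = record.tracking.split(',');
  }
  return record;
}

const out = transform({ record: { tracking: '1Z999,1Z888' } });
console.log(out.tracking); // → [ '1Z999', '1Z888' ]
```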
To tie it all together: `transform` scripts work on one record at a time and return just that record, while `preSavePage` scripts work on whole pages of data and must return the full `{ data, errors, abort, newErrorsAndRetryData }` object.