This is not possible straight out of the box with only the HTTP import response mapping.
The main limitation is that Celigo handles each HTTP batch request separately. So if you have 100 records and set the HTTP import to send 10 records per request, Celigo will create 10 separate API calls.
For example:
100 records → batch size of 10 → 10 HTTP requests → 10 separate HTTP responses
Each response mapping only has access to the response from the current HTTP request. It does not have access to the responses from the other batches in the same flow run.
So if batch 1 returns:
{ "processed": ["sku-1", "sku-2"] }
and batch 2 returns:
{ "processed": ["sku-11", "sku-12"] }
Celigo will not automatically give you one combined object like this inside response mapping:
{
  "allResponses": [
    { "processed": ["sku-1", "sku-2"] },
    { "processed": ["sku-11", "sku-12"] }
  ]
}
That type of cross-batch aggregation is not available in standard response mapping.
One workaround is to send each batch result to an external endpoint, database, queue, or temporary storage layer, and let that system combine the responses.
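As a rough sketch of that external accumulator idea: each batch posts its response under a shared run identifier, and the accumulator builds the combined object that Celigo cannot produce natively. The `RunAccumulator` class and its method names below are hypothetical, not part of any Celigo API.

```python
from collections import defaultdict


class RunAccumulator:
    """Hypothetical external service that combines batch responses per flow run."""

    def __init__(self):
        # Maps a run identifier to the list of batch responses received so far.
        self._responses = defaultdict(list)

    def add_batch_response(self, run_id, response):
        """Store one batch's HTTP response body under the given run identifier."""
        self._responses[run_id].append(response)

    def combined(self, run_id):
        """Return all batch responses for the run as a single combined object."""
        return {"allResponses": self._responses[run_id]}


acc = RunAccumulator()
acc.add_batch_response("run-1", {"processed": ["sku-1", "sku-2"]})
acc.add_batch_response("run-1", {"processed": ["sku-11", "sku-12"]})
print(acc.combined("run-1"))
# {'allResponses': [{'processed': ['sku-1', 'sku-2']}, {'processed': ['sku-11', 'sku-12']}]}
```

In a real deployment this store would be a database or cache keyed by the flow run, since separate HTTP calls cannot share in-process memory.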
Another option, if the destination API supports it, is to send a shared jobId or correlationId with every batch request. The receiving system can then group all of those API calls under the same job and build the final result on its side.
For example:
{
  "jobId": "{{job._id}}",
  "products": [ ... ]
}
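On the receiving side, the grouping could look like the sketch below. The `handle_request` function, the in-memory `jobs` store, and the payload shape are assumptions based on the jobId example above, not an actual API.

```python
from collections import defaultdict

# Hypothetical in-memory store keyed by jobId; a real receiver would use a
# database or queue so state survives across separate HTTP requests.
jobs = defaultdict(list)


def handle_request(payload):
    """Group one incoming batch under its jobId and report the running total."""
    job_id = payload["jobId"]
    jobs[job_id].extend(payload["products"])
    return {"jobId": job_id, "totalReceived": len(jobs[job_id])}


handle_request({"jobId": "job-42", "products": ["sku-1", "sku-2"]})
result = handle_request({"jobId": "job-42", "products": ["sku-11"]})
print(result)  # {'jobId': 'job-42', 'totalReceived': 3}
```

Because every batch carries the same {{job._id}}, the receiver can decide on its own when the job is complete, for example after a known batch count or a timeout.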
So in short: Celigo can handle the 10-record batching and the delay between requests, but it cannot natively combine all batch responses into one final JSON object in the same HTTP response mapping. For that part, you would need either an external accumulator or support from the destination API using something like a jobId or correlationId.