Paginated node collects ALL pages before going to the next step

I have:

A. A Webhook node, which receives a request to fetch specific data (customers or projects).

B. A Lookup node, which makes paginated GET requests to the customers or projects endpoint.

C. An Azure Blob node to save the fetched data.

When I run this flow from Postman, it fetches ALL pages, applies the mappings, and uploads only one file to Azure. How can I force it to upload a file to Azure after each page fetch?

Each page fetch from the lookup, or each request sent to the webhook? If the latter, set the page size to 1 on your export and choose "One file per page of records" on your import.

I need each individual page to be uploaded to Azure. This is because, when there are 500 pages, the lookup source can fail in the middle, and in that case the full data set is lost and nothing gets uploaded to Azure. If each page is uploaded separately, I assume I could retry later from the last successful page.

If the lookup fails in the middle, it would retry from the beginning of the pagination, not from the failed page. I can only think of complicated solutions here. What paging mechanism is this lookup using?

It uses “Page Number Parameter”.

Currently, we are using manual paging: we created a webhook node that receives a page number, extracts that one page, saves it as a BLOB file, and, if it is not the last page, calls itself with the page number incremented.
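The self-calling approach above can be sketched as follows. This is only an illustrative Python model of the flow's logic, not platform code: fetch_page stands in for the lookup's "Page Number Parameter" request, upload_blob for the Azure Blob step, and handle_webhook for the webhook that re-invokes itself.

```python
# Illustrative sketch of manual paging via a self-invoking webhook.
# All names and the in-memory "blob store" are hypothetical stand-ins.

PAGE_SIZE = 2
DATA = [{"id": i} for i in range(5)]   # pretend the lookup source holds 5 records
uploaded = {}                          # blob name -> records, in place of Azure Blob

def fetch_page(page: int):
    """One page of records, page-number style (pages start at 1)."""
    start = (page - 1) * PAGE_SIZE
    return DATA[start:start + PAGE_SIZE]

def upload_blob(name: str, records) -> None:
    """Each page lands in its own file, so a mid-run failure loses at most one page."""
    uploaded[name] = records

def handle_webhook(page: int) -> None:
    """Webhook handler: export one page, then trigger itself for the next page."""
    records = fetch_page(page)
    upload_blob(f"export-page-{page}.json", records)
    if len(records) == PAGE_SIZE:      # a full page implies more data may remain
        handle_webhook(page + 1)       # in the real flow this is an HTTP call back to the webhook

handle_webhook(1)                      # start the export from page 1
```

Because every page is persisted before the next request is made, a retry after a failure can resume by posting the last failed page number to the webhook instead of restarting from page 1.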

That seems like a good approach to me and I can’t think of a better way.

We are using this approach and have run into the following issue:

Since the flow is started by a webhook, and the data from the lookup node is mapped onto the webhook's result as an additional field, I am not able to convert the resulting nested JSON into CSV.

When I try to flatten the JSON data, I receive an error saying the original data has one row while the transformed data has 5 rows (the lookup data has 5 rows). What is the recommended way to flatten the JSON data?
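For context on the 1-row-vs-5-rows mismatch: the webhook produces a single record, and the lookup array nested inside it needs a one-to-many expansion, where the parent fields are repeated on every child row. A minimal sketch of that transform (field names here are hypothetical, not the actual flow's fields):

```python
# One webhook record with a nested lookup array, expanded to one flat row
# per array element. "requestId" and "lookupData" are made-up field names.

webhook_record = {
    "requestId": "abc",
    "lookupData": [               # lookup result mapped onto the webhook record
        {"id": 1, "name": "A"},
        {"id": 2, "name": "B"},
        {"id": 3, "name": "C"},
        {"id": 4, "name": "D"},
        {"id": 5, "name": "E"},
    ],
}

def flatten(record: dict, array_field: str) -> list[dict]:
    """Expand record[array_field] into flat rows, copying the parent fields onto each."""
    parent = {k: v for k, v in record.items() if k != array_field}
    return [{**parent, **child} for child in record[array_field]]

rows = flatten(webhook_record, "lookupData")   # 1 nested record -> 5 flat CSV-ready rows
```

The resulting rows all share the same shape, so they can be written to CSV directly; the error in the flow occurs because its transform expects the row count to stay unchanged, whereas this step is inherently one-to-many.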

I'd suggest coming to office hours on Tuesday. It's tough to follow what's going on.